00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3693 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3294 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.048 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.061 Fetching changes from the remote Git repository 00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.080 Using shallow fetch with depth 1 00:00:00.080 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.080 > git --version # timeout=10 00:00:00.105 > git --version # 'git version 2.39.2' 00:00:00.105 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.123 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.123 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.536 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.545 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.555 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:03.555 > git config core.sparsecheckout # timeout=10 00:00:03.563 > git read-tree -mu HEAD # timeout=10 00:00:03.577 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:03.619 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:03.619 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:03.699 [Pipeline] Start of Pipeline 00:00:03.711 [Pipeline] library 00:00:03.712 Loading library shm_lib@master 00:00:03.713 Library shm_lib@master is cached. Copying from home. 00:00:03.723 [Pipeline] node 00:00:03.733 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:03.734 [Pipeline] { 00:00:03.741 [Pipeline] catchError 00:00:03.742 [Pipeline] { 00:00:03.750 [Pipeline] wrap 00:00:03.756 [Pipeline] { 00:00:03.762 [Pipeline] stage 00:00:03.763 [Pipeline] { (Prologue) 00:00:03.775 [Pipeline] echo 00:00:03.776 Node: VM-host-SM9 00:00:03.779 [Pipeline] cleanWs 00:00:03.787 [WS-CLEANUP] Deleting project workspace... 00:00:03.797 [WS-CLEANUP] Deferred wipeout is used... 
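For reference, the pinned-revision checkout traced above reduces to roughly the git sequence below. This is a minimal sketch: credential, proxy, and timeout handling are omitted, and the standalone "jbp" directory name is an assumption, not part of the log.

  git init jbp && cd jbp
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f FETCH_HEAD   # this run pins f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08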
00:00:03.802 [WS-CLEANUP] done 00:00:03.970 [Pipeline] setCustomBuildProperty 00:00:04.050 [Pipeline] httpRequest 00:00:04.066 [Pipeline] echo 00:00:04.067 Sorcerer 10.211.164.101 is alive 00:00:04.072 [Pipeline] httpRequest 00:00:04.076 HttpMethod: GET 00:00:04.076 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.077 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.083 Response Code: HTTP/1.1 200 OK 00:00:04.083 Success: Status code 200 is in the accepted range: 200,404 00:00:04.084 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.331 [Pipeline] sh 00:00:06.609 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.620 [Pipeline] httpRequest 00:00:06.643 [Pipeline] echo 00:00:06.644 Sorcerer 10.211.164.101 is alive 00:00:06.650 [Pipeline] httpRequest 00:00:06.653 HttpMethod: GET 00:00:06.653 URL: http://10.211.164.101/packages/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz 00:00:06.654 Sending request to url: http://10.211.164.101/packages/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz 00:00:06.666 Response Code: HTTP/1.1 200 OK 00:00:06.666 Success: Status code 200 is in the accepted range: 200,404 00:00:06.667 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz 00:00:55.694 [Pipeline] sh 00:00:55.974 + tar --no-same-owner -xf spdk_d005e023bd514d7d48470775331498120af1a8d8.tar.gz 00:00:59.270 [Pipeline] sh 00:00:59.549 + git -C spdk log --oneline -n5 00:00:59.549 d005e023b raid: fix empty slot not updated in sb after resize 00:00:59.549 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:00:59.549 8ee2672c4 test/bdev: Add test for resized RAID with superblock 00:00:59.549 19f5787c8 raid: skip configured base bdevs in sb examine 00:00:59.549 3b9baa5f8 bdev/raid1: Support resize when increasing the size of base bdevs 00:00:59.568 [Pipeline] withCredentials 00:00:59.576 > git --version # timeout=10 00:00:59.587 > git --version # 'git version 2.39.2' 00:00:59.603 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:59.605 [Pipeline] { 00:00:59.614 [Pipeline] retry 00:00:59.616 [Pipeline] { 00:00:59.632 [Pipeline] sh 00:00:59.911 + git ls-remote http://dpdk.org/git/dpdk main 00:01:01.825 [Pipeline] } 00:01:01.845 [Pipeline] // retry 00:01:01.851 [Pipeline] } 00:01:01.870 [Pipeline] // withCredentials 00:01:01.879 [Pipeline] httpRequest 00:01:01.905 [Pipeline] echo 00:01:01.907 Sorcerer 10.211.164.101 is alive 00:01:01.915 [Pipeline] httpRequest 00:01:01.919 HttpMethod: GET 00:01:01.920 URL: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:01.920 Sending request to url: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:01.932 Response Code: HTTP/1.1 200 OK 00:01:01.933 Success: Status code 200 is in the accepted range: 200,404 00:01:01.933 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:27.736 [Pipeline] sh 00:01:28.017 + tar --no-same-owner -xf dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:29.425 [Pipeline] sh 00:01:29.705 + git -C dpdk log --oneline -n5 00:01:29.705 82c47f005b version: 24.07-rc3 00:01:29.705 d9d1be537e doc: remove reference to mbuf pkt field 
00:01:29.705 52c7393a03 doc: set required MinGW version in Windows guide 00:01:29.705 92439dc9ac dts: improve starting and stopping interactive shells 00:01:29.705 2b648cd4e4 dts: add context manager for interactive shells 00:01:29.722 [Pipeline] writeFile 00:01:29.737 [Pipeline] sh 00:01:30.017 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:30.028 [Pipeline] sh 00:01:30.307 + cat autorun-spdk.conf 00:01:30.307 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.307 SPDK_TEST_NVMF=1 00:01:30.307 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.307 SPDK_TEST_URING=1 00:01:30.307 SPDK_TEST_USDT=1 00:01:30.307 SPDK_RUN_UBSAN=1 00:01:30.307 NET_TYPE=virt 00:01:30.307 SPDK_TEST_NATIVE_DPDK=main 00:01:30.307 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:30.307 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.314 RUN_NIGHTLY=1 00:01:30.316 [Pipeline] } 00:01:30.331 [Pipeline] // stage 00:01:30.345 [Pipeline] stage 00:01:30.347 [Pipeline] { (Run VM) 00:01:30.361 [Pipeline] sh 00:01:30.641 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:30.641 + echo 'Start stage prepare_nvme.sh' 00:01:30.641 Start stage prepare_nvme.sh 00:01:30.641 + [[ -n 5 ]] 00:01:30.641 + disk_prefix=ex5 00:01:30.641 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:30.641 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:30.641 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:30.641 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.641 ++ SPDK_TEST_NVMF=1 00:01:30.641 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.641 ++ SPDK_TEST_URING=1 00:01:30.641 ++ SPDK_TEST_USDT=1 00:01:30.641 ++ SPDK_RUN_UBSAN=1 00:01:30.641 ++ NET_TYPE=virt 00:01:30.641 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:30.641 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:30.641 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.641 ++ RUN_NIGHTLY=1 00:01:30.641 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:30.641 + nvme_files=() 00:01:30.641 + declare -A nvme_files 00:01:30.641 + backend_dir=/var/lib/libvirt/images/backends 00:01:30.641 + nvme_files['nvme.img']=5G 00:01:30.641 + nvme_files['nvme-cmb.img']=5G 00:01:30.641 + nvme_files['nvme-multi0.img']=4G 00:01:30.641 + nvme_files['nvme-multi1.img']=4G 00:01:30.641 + nvme_files['nvme-multi2.img']=4G 00:01:30.641 + nvme_files['nvme-openstack.img']=8G 00:01:30.641 + nvme_files['nvme-zns.img']=5G 00:01:30.641 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:30.641 + (( SPDK_TEST_FTL == 1 )) 00:01:30.641 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:30.641 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:30.641 + for nvme in "${!nvme_files[@]}" 00:01:30.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:30.641 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.641 + for nvme in "${!nvme_files[@]}" 00:01:30.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:30.641 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.641 + for nvme in "${!nvme_files[@]}" 00:01:30.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:30.641 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:30.641 + for nvme in "${!nvme_files[@]}" 00:01:30.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:30.641 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.641 + for nvme in "${!nvme_files[@]}" 00:01:30.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:30.641 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.641 + for nvme in "${!nvme_files[@]}" 00:01:30.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:30.641 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.641 + for nvme in "${!nvme_files[@]}" 00:01:30.642 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:30.900 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.900 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:30.900 + echo 'End stage prepare_nvme.sh' 00:01:30.900 End stage prepare_nvme.sh 00:01:30.912 [Pipeline] sh 00:01:31.192 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:31.192 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:01:31.192 00:01:31.192 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:31.193 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:31.193 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:31.193 HELP=0 00:01:31.193 DRY_RUN=0 00:01:31.193 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:31.193 NVME_DISKS_TYPE=nvme,nvme, 00:01:31.193 NVME_AUTO_CREATE=0 00:01:31.193 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:31.193 NVME_CMB=,, 00:01:31.193 NVME_PMR=,, 00:01:31.193 NVME_ZNS=,, 00:01:31.193 NVME_MS=,, 00:01:31.193 NVME_FDP=,, 
00:01:31.193 SPDK_VAGRANT_DISTRO=fedora38 00:01:31.193 SPDK_VAGRANT_VMCPU=10 00:01:31.193 SPDK_VAGRANT_VMRAM=12288 00:01:31.193 SPDK_VAGRANT_PROVIDER=libvirt 00:01:31.193 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:31.193 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:31.193 SPDK_OPENSTACK_NETWORK=0 00:01:31.193 VAGRANT_PACKAGE_BOX=0 00:01:31.193 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:31.193 FORCE_DISTRO=true 00:01:31.193 VAGRANT_BOX_VERSION= 00:01:31.193 EXTRA_VAGRANTFILES= 00:01:31.193 NIC_MODEL=e1000 00:01:31.193 00:01:31.193 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:31.193 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:33.727 Bringing machine 'default' up with 'libvirt' provider... 00:01:34.294 ==> default: Creating image (snapshot of base box volume). 00:01:34.294 ==> default: Creating domain with the following settings... 00:01:34.294 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721871889_634f0a330f2e67647569 00:01:34.294 ==> default: -- Domain type: kvm 00:01:34.294 ==> default: -- Cpus: 10 00:01:34.294 ==> default: -- Feature: acpi 00:01:34.294 ==> default: -- Feature: apic 00:01:34.294 ==> default: -- Feature: pae 00:01:34.294 ==> default: -- Memory: 12288M 00:01:34.294 ==> default: -- Memory Backing: hugepages: 00:01:34.294 ==> default: -- Management MAC: 00:01:34.294 ==> default: -- Loader: 00:01:34.294 ==> default: -- Nvram: 00:01:34.294 ==> default: -- Base box: spdk/fedora38 00:01:34.294 ==> default: -- Storage pool: default 00:01:34.294 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721871889_634f0a330f2e67647569.img (20G) 00:01:34.294 ==> default: -- Volume Cache: default 00:01:34.294 ==> default: -- Kernel: 00:01:34.294 ==> default: -- Initrd: 00:01:34.294 ==> default: -- Graphics Type: vnc 00:01:34.294 ==> default: -- Graphics Port: -1 00:01:34.294 ==> default: -- Graphics IP: 127.0.0.1 00:01:34.294 ==> default: -- Graphics Password: Not defined 00:01:34.294 ==> default: -- Video Type: cirrus 00:01:34.294 ==> default: -- Video VRAM: 9216 00:01:34.294 ==> default: -- Sound Type: 00:01:34.294 ==> default: -- Keymap: en-us 00:01:34.294 ==> default: -- TPM Path: 00:01:34.294 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:34.294 ==> default: -- Command line args: 00:01:34.294 ==> default: -> value=-device, 00:01:34.294 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:34.294 ==> default: -> value=-drive, 00:01:34.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:34.294 ==> default: -> value=-device, 00:01:34.294 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.294 ==> default: -> value=-device, 00:01:34.294 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:34.294 ==> default: -> value=-drive, 00:01:34.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:34.294 ==> default: -> value=-device, 00:01:34.294 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.294 ==> default: -> value=-drive, 
00:01:34.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:34.294 ==> default: -> value=-device, 00:01:34.294 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.294 ==> default: -> value=-drive, 00:01:34.294 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:34.294 ==> default: -> value=-device, 00:01:34.294 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.553 ==> default: Creating shared folders metadata... 00:01:34.553 ==> default: Starting domain. 00:01:35.931 ==> default: Waiting for domain to get an IP address... 00:01:50.837 ==> default: Waiting for SSH to become available... 00:01:52.213 ==> default: Configuring and enabling network interfaces... 00:01:56.416 default: SSH address: 192.168.121.189:22 00:01:56.416 default: SSH username: vagrant 00:01:56.416 default: SSH auth method: private key 00:01:58.332 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:06.440 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:11.703 ==> default: Mounting SSHFS shared folder... 00:02:13.603 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:13.603 ==> default: Checking Mount.. 00:02:14.538 ==> default: Folder Successfully Mounted! 00:02:14.538 ==> default: Running provisioner: file... 00:02:15.482 default: ~/.gitconfig => .gitconfig 00:02:15.740 00:02:15.740 SUCCESS! 00:02:15.740 00:02:15.740 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:15.740 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:15.740 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:15.740 00:02:15.750 [Pipeline] } 00:02:15.768 [Pipeline] // stage 00:02:15.777 [Pipeline] dir 00:02:15.777 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:15.779 [Pipeline] { 00:02:15.793 [Pipeline] catchError 00:02:15.795 [Pipeline] { 00:02:15.809 [Pipeline] sh 00:02:16.089 + vagrant ssh-config --host vagrant 00:02:16.089 + sed -ne /^Host/,$p 00:02:16.089 + tee ssh_conf 00:02:20.277 Host vagrant 00:02:20.277 HostName 192.168.121.189 00:02:20.277 User vagrant 00:02:20.277 Port 22 00:02:20.277 UserKnownHostsFile /dev/null 00:02:20.277 StrictHostKeyChecking no 00:02:20.277 PasswordAuthentication no 00:02:20.277 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:20.277 IdentitiesOnly yes 00:02:20.277 LogLevel FATAL 00:02:20.278 ForwardAgent yes 00:02:20.278 ForwardX11 yes 00:02:20.278 00:02:20.291 [Pipeline] withEnv 00:02:20.293 [Pipeline] { 00:02:20.308 [Pipeline] sh 00:02:20.586 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:20.586 source /etc/os-release 00:02:20.586 [[ -e /image.version ]] && img=$(< /image.version) 00:02:20.586 # Minimal, systemd-like check. 
00:02:20.586 if [[ -e /.dockerenv ]]; then 00:02:20.586 # Clear garbage from the node's name: 00:02:20.586 # agt-er_autotest_547-896 -> autotest_547-896 00:02:20.586 # $HOSTNAME is the actual container id 00:02:20.586 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:20.586 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:20.586 # We can assume this is a mount from a host where container is running, 00:02:20.587 # so fetch its hostname to easily identify the target swarm worker. 00:02:20.587 container="$(< /etc/hostname) ($agent)" 00:02:20.587 else 00:02:20.587 # Fallback 00:02:20.587 container=$agent 00:02:20.587 fi 00:02:20.587 fi 00:02:20.587 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:20.587 00:02:20.597 [Pipeline] } 00:02:20.618 [Pipeline] // withEnv 00:02:20.627 [Pipeline] setCustomBuildProperty 00:02:20.645 [Pipeline] stage 00:02:20.647 [Pipeline] { (Tests) 00:02:20.668 [Pipeline] sh 00:02:20.949 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:21.221 [Pipeline] sh 00:02:21.521 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:21.536 [Pipeline] timeout 00:02:21.537 Timeout set to expire in 30 min 00:02:21.539 [Pipeline] { 00:02:21.555 [Pipeline] sh 00:02:21.853 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:22.420 HEAD is now at d005e023b raid: fix empty slot not updated in sb after resize 00:02:22.433 [Pipeline] sh 00:02:22.714 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:22.986 [Pipeline] sh 00:02:23.265 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:23.281 [Pipeline] sh 00:02:23.560 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:23.560 ++ readlink -f spdk_repo 00:02:23.560 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:23.560 + [[ -n /home/vagrant/spdk_repo ]] 00:02:23.560 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:23.560 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:23.560 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:23.560 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:23.560 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:23.560 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:23.560 + cd /home/vagrant/spdk_repo 00:02:23.560 + source /etc/os-release 00:02:23.560 ++ NAME='Fedora Linux' 00:02:23.560 ++ VERSION='38 (Cloud Edition)' 00:02:23.560 ++ ID=fedora 00:02:23.560 ++ VERSION_ID=38 00:02:23.560 ++ VERSION_CODENAME= 00:02:23.560 ++ PLATFORM_ID=platform:f38 00:02:23.560 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:23.560 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:23.560 ++ LOGO=fedora-logo-icon 00:02:23.560 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:23.560 ++ HOME_URL=https://fedoraproject.org/ 00:02:23.560 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:23.560 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:23.560 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:23.560 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:23.560 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:23.560 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:23.560 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:23.560 ++ SUPPORT_END=2024-05-14 00:02:23.560 ++ VARIANT='Cloud Edition' 00:02:23.560 ++ VARIANT_ID=cloud 00:02:23.560 + uname -a 00:02:23.560 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:23.819 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:24.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:24.077 Hugepages 00:02:24.077 node hugesize free / total 00:02:24.077 node0 1048576kB 0 / 0 00:02:24.077 node0 2048kB 0 / 0 00:02:24.077 00:02:24.077 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:24.077 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:24.077 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:24.336 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:24.336 + rm -f /tmp/spdk-ld-path 00:02:24.336 + source autorun-spdk.conf 00:02:24.336 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.336 ++ SPDK_TEST_NVMF=1 00:02:24.336 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:24.336 ++ SPDK_TEST_URING=1 00:02:24.336 ++ SPDK_TEST_USDT=1 00:02:24.336 ++ SPDK_RUN_UBSAN=1 00:02:24.336 ++ NET_TYPE=virt 00:02:24.336 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:24.336 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:24.336 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:24.336 ++ RUN_NIGHTLY=1 00:02:24.336 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:24.336 + [[ -n '' ]] 00:02:24.336 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:24.336 + for M in /var/spdk/build-*-manifest.txt 00:02:24.336 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:24.336 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:24.336 + for M in /var/spdk/build-*-manifest.txt 00:02:24.336 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:24.336 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:24.336 ++ uname 00:02:24.336 + [[ Linux == \L\i\n\u\x ]] 00:02:24.336 + sudo dmesg -T 00:02:24.336 + sudo dmesg --clear 00:02:24.336 + dmesg_pid=5894 00:02:24.336 + [[ Fedora Linux == FreeBSD ]] 00:02:24.336 + sudo dmesg -Tw 00:02:24.336 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:24.336 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:24.336 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:24.336 + [[ -x /usr/src/fio-static/fio ]] 00:02:24.336 + export FIO_BIN=/usr/src/fio-static/fio 00:02:24.336 + FIO_BIN=/usr/src/fio-static/fio 00:02:24.336 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:24.336 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:24.336 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:24.336 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:24.336 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:24.336 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:24.336 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:24.336 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:24.336 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:24.336 Test configuration: 00:02:24.336 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.336 SPDK_TEST_NVMF=1 00:02:24.336 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:24.336 SPDK_TEST_URING=1 00:02:24.336 SPDK_TEST_USDT=1 00:02:24.336 SPDK_RUN_UBSAN=1 00:02:24.336 NET_TYPE=virt 00:02:24.336 SPDK_TEST_NATIVE_DPDK=main 00:02:24.336 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:24.336 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:24.336 RUN_NIGHTLY=1 01:45:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:24.336 01:45:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:24.336 01:45:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:24.336 01:45:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:24.336 01:45:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.336 01:45:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.336 01:45:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.336 01:45:39 -- paths/export.sh@5 -- $ export PATH 00:02:24.336 01:45:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.336 01:45:39 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:24.336 01:45:39 -- common/autobuild_common.sh@447 
-- $ date +%s 00:02:24.336 01:45:39 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721871939.XXXXXX 00:02:24.336 01:45:39 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721871939.UvYz8Y 00:02:24.336 01:45:39 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:24.336 01:45:39 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:02:24.336 01:45:39 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:24.336 01:45:39 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:24.336 01:45:39 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:24.336 01:45:39 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:24.336 01:45:39 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:24.336 01:45:39 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:24.336 01:45:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.594 01:45:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:24.594 01:45:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:24.594 01:45:39 -- pm/common@17 -- $ local monitor 00:02:24.594 01:45:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.594 01:45:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.594 01:45:39 -- pm/common@25 -- $ sleep 1 00:02:24.594 01:45:39 -- pm/common@21 -- $ date +%s 00:02:24.594 01:45:39 -- pm/common@21 -- $ date +%s 00:02:24.594 01:45:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721871939 00:02:24.594 01:45:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721871939 00:02:24.594 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721871939_collect-vmstat.pm.log 00:02:24.594 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721871939_collect-cpu-load.pm.log 00:02:25.529 01:45:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:25.529 01:45:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:25.529 01:45:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:25.529 01:45:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:25.529 01:45:40 -- spdk/autobuild.sh@16 -- $ date -u 00:02:25.529 Thu Jul 25 01:45:40 AM UTC 2024 00:02:25.529 01:45:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:25.529 v24.09-pre-318-gd005e023b 00:02:25.529 01:45:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:25.529 01:45:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:25.529 01:45:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:25.529 01:45:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:25.529 01:45:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:25.529 01:45:40 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:25.529 ************************************ 00:02:25.529 START TEST ubsan 00:02:25.529 ************************************ 00:02:25.529 using ubsan 00:02:25.529 01:45:40 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:25.529 00:02:25.529 real 0m0.000s 00:02:25.529 user 0m0.000s 00:02:25.529 sys 0m0.000s 00:02:25.529 01:45:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:25.529 01:45:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:25.529 ************************************ 00:02:25.529 END TEST ubsan 00:02:25.529 ************************************ 00:02:25.529 01:45:40 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:25.529 01:45:40 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:25.529 01:45:40 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:25.529 01:45:40 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:25.529 01:45:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:25.529 01:45:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.529 ************************************ 00:02:25.529 START TEST build_native_dpdk 00:02:25.529 ************************************ 00:02:25.529 01:45:40 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:25.529 82c47f005b version: 24.07-rc3 00:02:25.529 d9d1be537e doc: remove reference to mbuf pkt field 00:02:25.529 52c7393a03 doc: set required MinGW version in Windows guide 00:02:25.529 92439dc9ac dts: improve starting and stopping interactive shells 00:02:25.529 2b648cd4e4 dts: add context manager for interactive shells 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc3 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:25.529 01:45:40 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc3 21.11.0 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 21.11.0 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:25.529 01:45:40 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@342 -- $ 
: 1 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:25.530 01:45:40 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:25.530 patching file config/rte_config.h 00:02:25.530 Hunk #1 succeeded at 70 (offset 11 lines). 00:02:25.530 01:45:40 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc3 24.07.0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 24.07.0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc3 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc3 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc3 =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^0x ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^[a-f0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:02:25.530 01:45:40 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:02:25.789 01:45:40 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:02:25.789 01:45:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:25.789 01:45:40 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:25.789 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:25.789 01:45:40 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:25.789 01:45:40 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:02:25.789 01:45:40 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:25.789 01:45:40 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:25.789 01:45:40 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:25.789 01:45:40 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:25.789 01:45:40 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:25.789 01:45:40 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:31.054 The Meson build system 00:02:31.054 Version: 1.3.1 00:02:31.054 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:31.054 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:31.054 Build type: native build 00:02:31.054 Program cat found: YES (/usr/bin/cat) 00:02:31.054 Project name: DPDK 00:02:31.054 Project version: 24.07.0-rc3 00:02:31.054 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:31.054 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:31.054 Host machine cpu family: x86_64 00:02:31.054 Host machine cpu: x86_64 00:02:31.054 Message: ## Building in Developer Mode ## 00:02:31.054 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:31.054 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:31.054 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:31.054 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:31.054 Program cat found: YES (/usr/bin/cat) 00:02:31.054 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
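The long xtrace above is scripts/common.sh comparing the checked-out DPDK (24.07.0-rc3) against 21.11.0 and then 24.07.0; both comparisons end in "return 1", meaning the tree is not older than either reference, after which the rte_config.h patch is applied with an offset and dpdk_kmods stays false. A condensed sketch of that componentwise logic, with the hypothetical helper name ver_lt standing in for the real cmp_versions machinery: versions split on ".", "-" and ":", components compare as decimals (so "07" reads as 7), and non-numeric parts such as "rc3" count as 0.

  ver_lt() {
      # Split both versions on '.', '-' and ':' (24.07.0-rc3 -> 24 07 0 rc3).
      local IFS='.-:'
      local -a v1=($1) v2=($2)
      local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          a=${v1[i]:-0}; b=${v2[i]:-0}
          # Non-numeric components such as "rc3" compare as 0.
          [[ $a =~ ^[0-9]+$ ]] || a=0
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( 10#$a > 10#$b )) && return 1   # 10# avoids octal surprises on "07"
          (( 10#$a < 10#$b )) && return 0
      done
      return 1   # equal, hence not strictly less-than
  }
  ver_lt 24.07.0-rc3 21.11.0 || echo "24.07.0-rc3 is not older than 21.11.0"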
00:02:31.054 Compiler for C supports arguments -march=native: YES 00:02:31.054 Checking for size of "void *" : 8 00:02:31.054 Checking for size of "void *" : 8 (cached) 00:02:31.054 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:31.054 Library m found: YES 00:02:31.054 Library numa found: YES 00:02:31.054 Has header "numaif.h" : YES 00:02:31.054 Library fdt found: NO 00:02:31.054 Library execinfo found: NO 00:02:31.054 Has header "execinfo.h" : YES 00:02:31.054 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:31.054 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:31.054 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:31.054 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:31.054 Run-time dependency openssl found: YES 3.0.9 00:02:31.054 Run-time dependency libpcap found: YES 1.10.4 00:02:31.054 Has header "pcap.h" with dependency libpcap: YES 00:02:31.054 Compiler for C supports arguments -Wcast-qual: YES 00:02:31.054 Compiler for C supports arguments -Wdeprecated: YES 00:02:31.054 Compiler for C supports arguments -Wformat: YES 00:02:31.054 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:31.054 Compiler for C supports arguments -Wformat-security: NO 00:02:31.054 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.054 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:31.054 Compiler for C supports arguments -Wnested-externs: YES 00:02:31.054 Compiler for C supports arguments -Wold-style-definition: YES 00:02:31.054 Compiler for C supports arguments -Wpointer-arith: YES 00:02:31.054 Compiler for C supports arguments -Wsign-compare: YES 00:02:31.054 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:31.054 Compiler for C supports arguments -Wundef: YES 00:02:31.054 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.054 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:31.055 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:31.055 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.055 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:31.055 Program objdump found: YES (/usr/bin/objdump) 00:02:31.055 Compiler for C supports arguments -mavx512f: YES 00:02:31.055 Checking if "AVX512 checking" compiles: YES 00:02:31.055 Fetching value of define "__SSE4_2__" : 1 00:02:31.055 Fetching value of define "__AES__" : 1 00:02:31.055 Fetching value of define "__AVX__" : 1 00:02:31.055 Fetching value of define "__AVX2__" : 1 00:02:31.055 Fetching value of define "__AVX512BW__" : (undefined) 00:02:31.055 Fetching value of define "__AVX512CD__" : (undefined) 00:02:31.055 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:31.055 Fetching value of define "__AVX512F__" : (undefined) 00:02:31.055 Fetching value of define "__AVX512VL__" : (undefined) 00:02:31.055 Fetching value of define "__PCLMUL__" : 1 00:02:31.055 Fetching value of define "__RDRND__" : 1 00:02:31.055 Fetching value of define "__RDSEED__" : 1 00:02:31.055 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:31.055 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:31.055 Message: lib/log: Defining dependency "log" 00:02:31.055 Message: lib/kvargs: Defining dependency "kvargs" 00:02:31.055 Message: lib/argparse: Defining dependency "argparse" 00:02:31.055 Message: lib/telemetry: Defining dependency "telemetry" 00:02:31.055 Checking for function "getentropy" : NO 
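The probes above are meson feature tests run against the same gcc; outside meson they reduce to one-liners, sketched here under the assumption of a plain gcc on the build host.

  # "Fetching value of define __AVX512F__": dump the macros predefined for the
  # native -march and look for the symbol (absent on this host, matching the
  # "(undefined)" results logged above, while __AVX2__, __AES__ and __PCLMUL__ hit).
  gcc -march=native -dM -E - </dev/null | grep -E '__(AVX512F|AVX2|AES|PCLMUL)__'
  # "Compiler for C supports arguments -mavx512f": try to compile a trivial
  # program with the flag under -Werror.
  echo 'int main(void){return 0;}' | gcc -Werror -mavx512f -x c -o /dev/null - \
      && echo 'supports -mavx512f'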
00:02:31.055 Message: lib/eal: Defining dependency "eal" 00:02:31.055 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:31.055 Message: lib/ring: Defining dependency "ring" 00:02:31.055 Message: lib/rcu: Defining dependency "rcu" 00:02:31.055 Message: lib/mempool: Defining dependency "mempool" 00:02:31.055 Message: lib/mbuf: Defining dependency "mbuf" 00:02:31.055 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:31.055 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.055 Compiler for C supports arguments -mpclmul: YES 00:02:31.055 Compiler for C supports arguments -maes: YES 00:02:31.055 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:31.055 Compiler for C supports arguments -mavx512bw: YES 00:02:31.055 Compiler for C supports arguments -mavx512dq: YES 00:02:31.055 Compiler for C supports arguments -mavx512vl: YES 00:02:31.055 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:31.055 Compiler for C supports arguments -mavx2: YES 00:02:31.055 Compiler for C supports arguments -mavx: YES 00:02:31.055 Message: lib/net: Defining dependency "net" 00:02:31.055 Message: lib/meter: Defining dependency "meter" 00:02:31.055 Message: lib/ethdev: Defining dependency "ethdev" 00:02:31.055 Message: lib/pci: Defining dependency "pci" 00:02:31.055 Message: lib/cmdline: Defining dependency "cmdline" 00:02:31.055 Message: lib/metrics: Defining dependency "metrics" 00:02:31.055 Message: lib/hash: Defining dependency "hash" 00:02:31.055 Message: lib/timer: Defining dependency "timer" 00:02:31.055 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.055 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:31.055 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:31.055 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:31.055 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:31.055 Message: lib/acl: Defining dependency "acl" 00:02:31.055 Message: lib/bbdev: Defining dependency "bbdev" 00:02:31.055 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:31.055 Run-time dependency libelf found: YES 0.190 00:02:31.055 Message: lib/bpf: Defining dependency "bpf" 00:02:31.055 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:31.055 Message: lib/compressdev: Defining dependency "compressdev" 00:02:31.055 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:31.055 Message: lib/distributor: Defining dependency "distributor" 00:02:31.055 Message: lib/dmadev: Defining dependency "dmadev" 00:02:31.055 Message: lib/efd: Defining dependency "efd" 00:02:31.055 Message: lib/eventdev: Defining dependency "eventdev" 00:02:31.055 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:31.055 Message: lib/gpudev: Defining dependency "gpudev" 00:02:31.055 Message: lib/gro: Defining dependency "gro" 00:02:31.055 Message: lib/gso: Defining dependency "gso" 00:02:31.055 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:31.055 Message: lib/jobstats: Defining dependency "jobstats" 00:02:31.055 Message: lib/latencystats: Defining dependency "latencystats" 00:02:31.055 Message: lib/lpm: Defining dependency "lpm" 00:02:31.055 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.055 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:31.055 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:31.055 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
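To rebuild this DPDK configuration outside the harness, the invocation logged earlier maps onto plain meson and ninja. A sketch with two deliberate substitutions: newer meson spells the first step "meson setup", and the deprecated -Dmachine=native is replaced by -Dcpu_instruction_set=native, as the warning above suggests.

  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
      '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  ninja -C build-tmp && meson install -C build-tmp
  # SPDK then consumes the result via --with-dpdk=/home/vagrant/spdk_repo/dpdk/build
  # (see the config_params line earlier in this log).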
00:02:31.055 Message: lib/member: Defining dependency "member" 00:02:31.055 Message: lib/pcapng: Defining dependency "pcapng" 00:02:31.055 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:31.055 Message: lib/power: Defining dependency "power" 00:02:31.055 Message: lib/rawdev: Defining dependency "rawdev" 00:02:31.055 Message: lib/regexdev: Defining dependency "regexdev" 00:02:31.055 Message: lib/mldev: Defining dependency "mldev" 00:02:31.055 Message: lib/rib: Defining dependency "rib" 00:02:31.055 Message: lib/reorder: Defining dependency "reorder" 00:02:31.055 Message: lib/sched: Defining dependency "sched" 00:02:31.055 Message: lib/security: Defining dependency "security" 00:02:31.055 Message: lib/stack: Defining dependency "stack" 00:02:31.055 Has header "linux/userfaultfd.h" : YES 00:02:31.055 Has header "linux/vduse.h" : YES 00:02:31.055 Message: lib/vhost: Defining dependency "vhost" 00:02:31.055 Message: lib/ipsec: Defining dependency "ipsec" 00:02:31.055 Message: lib/pdcp: Defining dependency "pdcp" 00:02:31.055 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.055 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:31.055 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:31.055 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:31.055 Message: lib/fib: Defining dependency "fib" 00:02:31.055 Message: lib/port: Defining dependency "port" 00:02:31.055 Message: lib/pdump: Defining dependency "pdump" 00:02:31.055 Message: lib/table: Defining dependency "table" 00:02:31.055 Message: lib/pipeline: Defining dependency "pipeline" 00:02:31.055 Message: lib/graph: Defining dependency "graph" 00:02:31.055 Message: lib/node: Defining dependency "node" 00:02:31.055 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:32.438 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:32.438 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:32.438 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:32.438 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:32.438 Compiler for C supports arguments -Wno-unused-value: YES 00:02:32.438 Compiler for C supports arguments -Wno-format: YES 00:02:32.438 Compiler for C supports arguments -Wno-format-security: YES 00:02:32.438 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:32.438 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:32.438 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:32.438 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:32.438 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.438 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:32.438 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:32.438 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:32.438 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:32.438 Has header "sys/epoll.h" : YES 00:02:32.438 Program doxygen found: YES (/usr/bin/doxygen) 00:02:32.438 Configuring doxy-api-html.conf using configuration 00:02:32.438 Configuring doxy-api-man.conf using configuration 00:02:32.438 Program mandb found: YES (/usr/bin/mandb) 00:02:32.438 Program sphinx-build found: NO 00:02:32.438 Configuring rte_build_config.h using configuration 00:02:32.438 Message: 00:02:32.438 ================= 00:02:32.438 Applications Enabled 00:02:32.438 ================= 00:02:32.438 00:02:32.438 apps: 
00:02:32.438 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:32.438 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:32.438 test-pmd, test-regex, test-sad, test-security-perf, 00:02:32.438 00:02:32.438 Message: 00:02:32.439 ================= 00:02:32.439 Libraries Enabled 00:02:32.439 ================= 00:02:32.439 00:02:32.439 libs: 00:02:32.439 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:32.439 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:32.439 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:32.439 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:32.439 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:32.439 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:32.439 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:32.439 graph, node, 00:02:32.439 00:02:32.439 Message: 00:02:32.439 =============== 00:02:32.439 Drivers Enabled 00:02:32.439 =============== 00:02:32.439 00:02:32.439 common: 00:02:32.439 00:02:32.439 bus: 00:02:32.439 pci, vdev, 00:02:32.439 mempool: 00:02:32.439 ring, 00:02:32.439 dma: 00:02:32.439 00:02:32.439 net: 00:02:32.439 i40e, 00:02:32.439 raw: 00:02:32.439 00:02:32.439 crypto: 00:02:32.439 00:02:32.439 compress: 00:02:32.439 00:02:32.439 regex: 00:02:32.439 00:02:32.439 ml: 00:02:32.439 00:02:32.439 vdpa: 00:02:32.439 00:02:32.439 event: 00:02:32.439 00:02:32.439 baseband: 00:02:32.439 00:02:32.439 gpu: 00:02:32.439 00:02:32.439 00:02:32.439 Message: 00:02:32.439 ================= 00:02:32.439 Content Skipped 00:02:32.439 ================= 00:02:32.439 00:02:32.439 apps: 00:02:32.439 00:02:32.439 libs: 00:02:32.439 00:02:32.439 drivers: 00:02:32.439 common/cpt: not in enabled drivers build config 00:02:32.439 common/dpaax: not in enabled drivers build config 00:02:32.439 common/iavf: not in enabled drivers build config 00:02:32.439 common/idpf: not in enabled drivers build config 00:02:32.439 common/ionic: not in enabled drivers build config 00:02:32.439 common/mvep: not in enabled drivers build config 00:02:32.439 common/octeontx: not in enabled drivers build config 00:02:32.439 bus/auxiliary: not in enabled drivers build config 00:02:32.439 bus/cdx: not in enabled drivers build config 00:02:32.439 bus/dpaa: not in enabled drivers build config 00:02:32.439 bus/fslmc: not in enabled drivers build config 00:02:32.439 bus/ifpga: not in enabled drivers build config 00:02:32.439 bus/platform: not in enabled drivers build config 00:02:32.439 bus/uacce: not in enabled drivers build config 00:02:32.439 bus/vmbus: not in enabled drivers build config 00:02:32.439 common/cnxk: not in enabled drivers build config 00:02:32.439 common/mlx5: not in enabled drivers build config 00:02:32.439 common/nfp: not in enabled drivers build config 00:02:32.439 common/nitrox: not in enabled drivers build config 00:02:32.439 common/qat: not in enabled drivers build config 00:02:32.439 common/sfc_efx: not in enabled drivers build config 00:02:32.439 mempool/bucket: not in enabled drivers build config 00:02:32.439 mempool/cnxk: not in enabled drivers build config 00:02:32.439 mempool/dpaa: not in enabled drivers build config 00:02:32.439 mempool/dpaa2: not in enabled drivers build config 00:02:32.439 mempool/octeontx: not in enabled drivers build config 00:02:32.439 mempool/stack: not in enabled drivers build config 00:02:32.439 
dma/cnxk: not in enabled drivers build config 00:02:32.439 dma/dpaa: not in enabled drivers build config 00:02:32.439 dma/dpaa2: not in enabled drivers build config 00:02:32.439 dma/hisilicon: not in enabled drivers build config 00:02:32.439 dma/idxd: not in enabled drivers build config 00:02:32.439 dma/ioat: not in enabled drivers build config 00:02:32.439 dma/odm: not in enabled drivers build config 00:02:32.439 dma/skeleton: not in enabled drivers build config 00:02:32.439 net/af_packet: not in enabled drivers build config 00:02:32.439 net/af_xdp: not in enabled drivers build config 00:02:32.439 net/ark: not in enabled drivers build config 00:02:32.439 net/atlantic: not in enabled drivers build config 00:02:32.439 net/avp: not in enabled drivers build config 00:02:32.439 net/axgbe: not in enabled drivers build config 00:02:32.439 net/bnx2x: not in enabled drivers build config 00:02:32.439 net/bnxt: not in enabled drivers build config 00:02:32.439 net/bonding: not in enabled drivers build config 00:02:32.439 net/cnxk: not in enabled drivers build config 00:02:32.439 net/cpfl: not in enabled drivers build config 00:02:32.439 net/cxgbe: not in enabled drivers build config 00:02:32.439 net/dpaa: not in enabled drivers build config 00:02:32.439 net/dpaa2: not in enabled drivers build config 00:02:32.439 net/e1000: not in enabled drivers build config 00:02:32.439 net/ena: not in enabled drivers build config 00:02:32.439 net/enetc: not in enabled drivers build config 00:02:32.439 net/enetfec: not in enabled drivers build config 00:02:32.439 net/enic: not in enabled drivers build config 00:02:32.439 net/failsafe: not in enabled drivers build config 00:02:32.439 net/fm10k: not in enabled drivers build config 00:02:32.439 net/gve: not in enabled drivers build config 00:02:32.439 net/hinic: not in enabled drivers build config 00:02:32.439 net/hns3: not in enabled drivers build config 00:02:32.439 net/iavf: not in enabled drivers build config 00:02:32.439 net/ice: not in enabled drivers build config 00:02:32.439 net/idpf: not in enabled drivers build config 00:02:32.439 net/igc: not in enabled drivers build config 00:02:32.439 net/ionic: not in enabled drivers build config 00:02:32.439 net/ipn3ke: not in enabled drivers build config 00:02:32.439 net/ixgbe: not in enabled drivers build config 00:02:32.439 net/mana: not in enabled drivers build config 00:02:32.439 net/memif: not in enabled drivers build config 00:02:32.439 net/mlx4: not in enabled drivers build config 00:02:32.439 net/mlx5: not in enabled drivers build config 00:02:32.439 net/mvneta: not in enabled drivers build config 00:02:32.439 net/mvpp2: not in enabled drivers build config 00:02:32.439 net/netvsc: not in enabled drivers build config 00:02:32.439 net/nfb: not in enabled drivers build config 00:02:32.439 net/nfp: not in enabled drivers build config 00:02:32.439 net/ngbe: not in enabled drivers build config 00:02:32.439 net/ntnic: not in enabled drivers build config 00:02:32.439 net/null: not in enabled drivers build config 00:02:32.439 net/octeontx: not in enabled drivers build config 00:02:32.439 net/octeon_ep: not in enabled drivers build config 00:02:32.439 net/pcap: not in enabled drivers build config 00:02:32.439 net/pfe: not in enabled drivers build config 00:02:32.439 net/qede: not in enabled drivers build config 00:02:32.439 net/ring: not in enabled drivers build config 00:02:32.439 net/sfc: not in enabled drivers build config 00:02:32.439 net/softnic: not in enabled drivers build config 00:02:32.439 net/tap: not in 
enabled drivers build config 00:02:32.439 net/thunderx: not in enabled drivers build config 00:02:32.439 net/txgbe: not in enabled drivers build config 00:02:32.439 net/vdev_netvsc: not in enabled drivers build config 00:02:32.439 net/vhost: not in enabled drivers build config 00:02:32.439 net/virtio: not in enabled drivers build config 00:02:32.439 net/vmxnet3: not in enabled drivers build config 00:02:32.439 raw/cnxk_bphy: not in enabled drivers build config 00:02:32.439 raw/cnxk_gpio: not in enabled drivers build config 00:02:32.439 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:32.439 raw/ifpga: not in enabled drivers build config 00:02:32.439 raw/ntb: not in enabled drivers build config 00:02:32.439 raw/skeleton: not in enabled drivers build config 00:02:32.439 crypto/armv8: not in enabled drivers build config 00:02:32.439 crypto/bcmfs: not in enabled drivers build config 00:02:32.439 crypto/caam_jr: not in enabled drivers build config 00:02:32.439 crypto/ccp: not in enabled drivers build config 00:02:32.439 crypto/cnxk: not in enabled drivers build config 00:02:32.439 crypto/dpaa_sec: not in enabled drivers build config 00:02:32.439 crypto/dpaa2_sec: not in enabled drivers build config 00:02:32.439 crypto/ionic: not in enabled drivers build config 00:02:32.439 crypto/ipsec_mb: not in enabled drivers build config 00:02:32.439 crypto/mlx5: not in enabled drivers build config 00:02:32.439 crypto/mvsam: not in enabled drivers build config 00:02:32.439 crypto/nitrox: not in enabled drivers build config 00:02:32.439 crypto/null: not in enabled drivers build config 00:02:32.439 crypto/octeontx: not in enabled drivers build config 00:02:32.439 crypto/openssl: not in enabled drivers build config 00:02:32.439 crypto/scheduler: not in enabled drivers build config 00:02:32.439 crypto/uadk: not in enabled drivers build config 00:02:32.439 crypto/virtio: not in enabled drivers build config 00:02:32.439 compress/isal: not in enabled drivers build config 00:02:32.439 compress/mlx5: not in enabled drivers build config 00:02:32.439 compress/nitrox: not in enabled drivers build config 00:02:32.439 compress/octeontx: not in enabled drivers build config 00:02:32.439 compress/uadk: not in enabled drivers build config 00:02:32.439 compress/zlib: not in enabled drivers build config 00:02:32.439 regex/mlx5: not in enabled drivers build config 00:02:32.439 regex/cn9k: not in enabled drivers build config 00:02:32.439 ml/cnxk: not in enabled drivers build config 00:02:32.439 vdpa/ifc: not in enabled drivers build config 00:02:32.439 vdpa/mlx5: not in enabled drivers build config 00:02:32.439 vdpa/nfp: not in enabled drivers build config 00:02:32.439 vdpa/sfc: not in enabled drivers build config 00:02:32.439 event/cnxk: not in enabled drivers build config 00:02:32.439 event/dlb2: not in enabled drivers build config 00:02:32.439 event/dpaa: not in enabled drivers build config 00:02:32.439 event/dpaa2: not in enabled drivers build config 00:02:32.439 event/dsw: not in enabled drivers build config 00:02:32.439 event/opdl: not in enabled drivers build config 00:02:32.439 event/skeleton: not in enabled drivers build config 00:02:32.439 event/sw: not in enabled drivers build config 00:02:32.439 event/octeontx: not in enabled drivers build config 00:02:32.439 baseband/acc: not in enabled drivers build config 00:02:32.439 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:32.439 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:32.439 baseband/la12xx: not in enabled drivers 
build config 00:02:32.439 baseband/null: not in enabled drivers build config 00:02:32.440 baseband/turbo_sw: not in enabled drivers build config 00:02:32.440 gpu/cuda: not in enabled drivers build config 00:02:32.440 00:02:32.440 00:02:32.440 Build targets in project: 224 00:02:32.440 00:02:32.440 DPDK 24.07.0-rc3 00:02:32.440 00:02:32.440 User defined options 00:02:32.440 libdir : lib 00:02:32.440 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:32.440 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:32.440 c_link_args : 00:02:32.440 enable_docs : false 00:02:32.440 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:32.440 enable_kmods : false 00:02:32.440 machine : native 00:02:32.440 tests : false 00:02:32.440 00:02:32.440 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.440 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:32.440 01:45:47 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:32.440 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:32.440 [1/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:32.698 [2/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:32.698 [3/723] Linking static target lib/librte_kvargs.a 00:02:32.698 [4/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:32.698 [5/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:32.698 [6/723] Linking static target lib/librte_log.a 00:02:32.955 [7/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:32.955 [8/723] Linking static target lib/librte_argparse.a 00:02:32.955 [9/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.955 [10/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.214 [11/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:33.214 [12/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:33.214 [13/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:33.214 [14/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:33.214 [15/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:33.214 [16/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:33.214 [17/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.214 [18/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:33.214 [19/723] Linking target lib/librte_log.so.24.2 00:02:33.472 [20/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:33.730 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:33.730 [22/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:02:33.730 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:33.730 [24/723] Linking target lib/librte_kvargs.so.24.2 00:02:33.730 [25/723] Linking target lib/librte_argparse.so.24.2 00:02:33.730 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:33.730 [27/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:33.730 [28/723] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:33.730 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:33.988 [30/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:02:33.988 [31/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:33.988 [32/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:33.988 [33/723] Linking static target lib/librte_telemetry.a 00:02:33.988 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:33.988 [35/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:34.246 [36/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.246 [37/723] Linking target lib/librte_telemetry.so.24.2 00:02:34.505 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:34.505 [39/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:34.505 [40/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:34.505 [41/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:34.505 [42/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:34.505 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.505 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.505 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:34.505 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.505 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.505 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:34.763 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.022 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.022 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.022 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.281 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.281 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.281 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.281 [56/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:35.281 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.542 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.542 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.542 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.801 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:35.801 [62/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.801 [63/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.801 [64/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.801 [65/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.801 [66/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.801 [67/723] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:35.801 [68/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:36.060 [69/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:36.060 [70/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:36.060 [71/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:36.318 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:36.318 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:36.577 [74/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:36.577 [75/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:36.577 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:36.577 [77/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:36.577 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:36.577 [79/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:36.577 [80/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:36.835 [81/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:36.835 [82/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:36.835 [83/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.093 [84/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.093 [85/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:37.093 [86/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:37.093 [87/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:37.093 [88/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.352 [89/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:37.352 [90/723] Linking static target lib/librte_ring.a 00:02:37.610 [91/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.610 [92/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:37.610 [93/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:37.610 [94/723] Linking static target lib/librte_eal.a 00:02:37.610 [95/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.869 [96/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.869 [97/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.869 [98/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:37.869 [99/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.869 [100/723] Linking static target lib/librte_mempool.a 00:02:38.128 [101/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.128 [102/723] Linking static target lib/librte_rcu.a 00:02:38.128 [103/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.128 [104/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.386 [105/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.386 [106/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.386 [107/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.386 [108/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 
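For orientation while the librte_eal, librte_ring and librte_mempool targets above are compiled and linked: a minimal consumer of these libraries initializes the EAL before calling any other DPDK API. The sketch below is an assumption-laden illustration (file name and build flags are invented), but rte_eal_init() and rte_eal_cleanup() are the documented DPDK entry points.

    /* Minimal EAL consumer; illustrative, not part of this log. */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses the EAL arguments (cores, memory,
         * device allow-list) and brings up hugepages, buses and
         * worker lcores; it returns a negative value on failure. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        rte_eal_cleanup();  /* release EAL resources at shutdown */
        return 0;
    }

Against an installed build it would typically be compiled with the flags that pkg-config reports for libdpdk.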
00:02:38.386 [109/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.644 [110/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.644 [111/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.644 [112/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.644 [113/723] Linking static target lib/librte_mbuf.a 00:02:38.903 [114/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.903 [115/723] Linking static target lib/librte_net.a 00:02:38.903 [116/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.903 [117/723] Linking static target lib/librte_meter.a 00:02:39.161 [118/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.161 [119/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.161 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.161 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.161 [122/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.161 [123/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.419 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.985 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.985 [126/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.243 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.243 [128/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.243 [129/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.243 [130/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.501 [131/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.501 [132/723] Linking static target lib/librte_pci.a 00:02:40.501 [133/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.501 [134/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.501 [135/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.501 [136/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.759 [137/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.759 [138/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.759 [139/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.759 [140/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:40.759 [141/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.759 [142/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:40.759 [143/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:41.017 [144/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:41.017 [145/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:41.017 [146/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.017 [147/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.017 [148/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.275 
[149/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:41.275 [150/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.275 [151/723] Linking static target lib/librte_cmdline.a 00:02:41.533 [152/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.533 [153/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:41.533 [154/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:41.533 [155/723] Linking static target lib/librte_metrics.a 00:02:41.533 [156/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:41.790 [157/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.048 [158/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.048 [159/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:42.048 [160/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.306 [161/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.564 [162/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.564 [163/723] Linking static target lib/librte_timer.a 00:02:42.822 [164/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.822 [165/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:43.080 [166/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:43.080 [167/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:43.349 [168/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:43.629 [169/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:43.629 [170/723] Linking static target lib/librte_ethdev.a 00:02:43.886 [171/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:43.886 [172/723] Linking static target lib/librte_bitratestats.a 00:02:43.886 [173/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:43.886 [174/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:44.144 [175/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:44.144 [176/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.144 [177/723] Linking static target lib/librte_bbdev.a 00:02:44.144 [178/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.144 [179/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.144 [180/723] Linking static target lib/librte_hash.a 00:02:44.144 [181/723] Linking target lib/librte_eal.so.24.2 00:02:44.144 [182/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:44.403 [183/723] Linking target lib/librte_ring.so.24.2 00:02:44.403 [184/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:44.403 [185/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:44.403 [186/723] Linking target lib/librte_meter.so.24.2 00:02:44.403 [187/723] Linking target lib/librte_rcu.so.24.2 00:02:44.403 [188/723] Linking target lib/librte_mempool.so.24.2 00:02:44.661 [189/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:44.661 [190/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:44.661 [191/723] Generating lib/bbdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:44.661 [192/723] Linking target lib/librte_pci.so.24.2 00:02:44.661 [193/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:44.661 [194/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:44.661 [195/723] Linking static target lib/acl/libavx2_tmp.a 00:02:44.661 [196/723] Linking target lib/librte_timer.so.24.2 00:02:44.661 [197/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:44.661 [198/723] Linking target lib/librte_mbuf.so.24.2 00:02:44.661 [199/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:44.661 [200/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.661 [201/723] Linking static target lib/acl/libavx512_tmp.a 00:02:44.661 [202/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:44.919 [203/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:44.919 [204/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:44.919 [205/723] Linking target lib/librte_net.so.24.2 00:02:44.919 [206/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:44.919 [207/723] Linking target lib/librte_bbdev.so.24.2 00:02:44.919 [208/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:44.919 [209/723] Linking static target lib/librte_acl.a 00:02:44.919 [210/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:44.919 [211/723] Linking target lib/librte_cmdline.so.24.2 00:02:45.177 [212/723] Linking target lib/librte_hash.so.24.2 00:02:45.177 [213/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:45.177 [214/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:45.177 [215/723] Linking static target lib/librte_cfgfile.a 00:02:45.177 [216/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:45.177 [217/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.434 [218/723] Linking target lib/librte_acl.so.24.2 00:02:45.434 [219/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:45.434 [220/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:45.692 [221/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.692 [222/723] Linking target lib/librte_cfgfile.so.24.2 00:02:45.692 [223/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:45.692 [224/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:45.949 [225/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.949 [226/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:45.949 [227/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:46.207 [228/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.207 [229/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:46.207 [230/723] Linking static target lib/librte_bpf.a 00:02:46.207 [231/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.207 [232/723] Linking static target lib/librte_compressdev.a 00:02:46.467 [233/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.467 
[234/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.725 [235/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:46.726 [236/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:46.726 [237/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:46.726 [238/723] Linking static target lib/librte_distributor.a 00:02:46.726 [239/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.726 [240/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.726 [241/723] Linking target lib/librte_compressdev.so.24.2 00:02:46.984 [242/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.984 [243/723] Linking target lib/librte_distributor.so.24.2 00:02:46.984 [244/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.984 [245/723] Linking static target lib/librte_dmadev.a 00:02:47.242 [246/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:47.501 [247/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.501 [248/723] Linking target lib/librte_dmadev.so.24.2 00:02:47.501 [249/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:47.759 [250/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:48.017 [251/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:48.017 [252/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:48.017 [253/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:48.017 [254/723] Linking static target lib/librte_efd.a 00:02:48.274 [255/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.274 [256/723] Linking static target lib/librte_cryptodev.a 00:02:48.274 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:48.274 [258/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.530 [259/723] Linking target lib/librte_efd.so.24.2 00:02:48.787 [260/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:48.787 [261/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.787 [262/723] Linking target lib/librte_ethdev.so.24.2 00:02:48.787 [263/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:48.787 [264/723] Linking static target lib/librte_dispatcher.a 00:02:48.787 [265/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:48.787 [266/723] Linking static target lib/librte_gpudev.a 00:02:49.045 [267/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:49.045 [268/723] Linking target lib/librte_metrics.so.24.2 00:02:49.045 [269/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:49.045 [270/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:49.045 [271/723] Linking target lib/librte_bpf.so.24.2 00:02:49.045 [272/723] Linking target lib/librte_bitratestats.so.24.2 00:02:49.045 [273/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:49.303 
[274/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:49.303 [275/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:49.303 [276/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.560 [277/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:49.560 [278/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.560 [279/723] Linking target lib/librte_cryptodev.so.24.2 00:02:49.560 [280/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:49.560 [281/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:49.819 [282/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.819 [283/723] Linking target lib/librte_gpudev.so.24.2 00:02:49.819 [284/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:50.077 [285/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:50.077 [286/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:50.077 [287/723] Linking static target lib/librte_eventdev.a 00:02:50.077 [288/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:50.077 [289/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:50.077 [290/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:50.077 [291/723] Linking static target lib/librte_gro.a 00:02:50.077 [292/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:50.335 [293/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:50.335 [294/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.335 [295/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:50.335 [296/723] Linking target lib/librte_gro.so.24.2 00:02:50.335 [297/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:50.335 [298/723] Linking static target lib/librte_gso.a 00:02:50.594 [299/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.594 [300/723] Linking target lib/librte_gso.so.24.2 00:02:50.853 [301/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:50.853 [302/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:50.853 [303/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:50.853 [304/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:50.853 [305/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:51.111 [306/723] Linking static target lib/librte_jobstats.a 00:02:51.111 [307/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:51.111 [308/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:51.111 [309/723] Linking static target lib/librte_ip_frag.a 00:02:51.111 [310/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:51.111 [311/723] Linking static target lib/librte_latencystats.a 00:02:51.369 [312/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.369 [313/723] Linking target lib/librte_jobstats.so.24.2 00:02:51.369 [314/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:51.369 [315/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.369 [316/723] Linking target lib/librte_latencystats.so.24.2 00:02:51.369 [317/723] Linking target lib/librte_ip_frag.so.24.2 00:02:51.369 [318/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:51.369 [319/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:51.628 [320/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:51.628 [321/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:51.628 [322/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:51.628 [323/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.628 [324/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.886 [325/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.144 [326/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.144 [327/723] Linking target lib/librte_eventdev.so.24.2 00:02:52.144 [328/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:52.144 [329/723] Linking static target lib/librte_lpm.a 00:02:52.144 [330/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:52.144 [331/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.144 [332/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:52.402 [333/723] Linking target lib/librte_dispatcher.so.24.2 00:02:52.402 [334/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.402 [335/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.402 [336/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:52.402 [337/723] Linking static target lib/librte_pcapng.a 00:02:52.402 [338/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.402 [339/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:52.402 [340/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.660 [341/723] Linking target lib/librte_lpm.so.24.2 00:02:52.660 [342/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:52.660 [343/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.660 [344/723] Linking target lib/librte_pcapng.so.24.2 00:02:52.918 [345/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.918 [346/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:52.918 [347/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.177 [348/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.177 [349/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:53.177 [350/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.177 [351/723] Linking static target lib/librte_power.a 00:02:53.177 [352/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:53.177 [353/723] Linking static target lib/librte_rawdev.a 00:02:53.177 [354/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:53.177 [355/723] Compiling C object 
lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:53.435 [356/723] Linking static target lib/librte_regexdev.a 00:02:53.435 [357/723] Linking static target lib/librte_member.a 00:02:53.435 [358/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:53.435 [359/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:53.435 [360/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:53.693 [361/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.693 [362/723] Linking target lib/librte_member.so.24.2 00:02:53.693 [363/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.693 [364/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:53.693 [365/723] Linking static target lib/librte_mldev.a 00:02:53.693 [366/723] Linking target lib/librte_rawdev.so.24.2 00:02:53.951 [367/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:53.951 [368/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.951 [369/723] Linking target lib/librte_power.so.24.2 00:02:53.951 [370/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:53.951 [371/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.951 [372/723] Linking target lib/librte_regexdev.so.24.2 00:02:54.209 [373/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:54.209 [374/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:54.467 [375/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:54.467 [376/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.467 [377/723] Linking static target lib/librte_rib.a 00:02:54.467 [378/723] Linking static target lib/librte_reorder.a 00:02:54.467 [379/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:54.467 [380/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:54.726 [381/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:54.726 [382/723] Linking static target lib/librte_stack.a 00:02:54.726 [383/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.726 [384/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.726 [385/723] Linking target lib/librte_reorder.so.24.2 00:02:54.726 [386/723] Linking static target lib/librte_security.a 00:02:54.726 [387/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.726 [388/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.726 [389/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:54.984 [390/723] Linking target lib/librte_rib.so.24.2 00:02:54.984 [391/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.984 [392/723] Linking target lib/librte_stack.so.24.2 00:02:54.984 [393/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:55.242 [394/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.243 [395/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.243 [396/723] Linking target lib/librte_security.so.24.2 00:02:55.243 [397/723] Linking target 
lib/librte_mldev.so.24.2 00:02:55.243 [398/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.243 [399/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:55.243 [400/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.501 [401/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.501 [402/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:55.501 [403/723] Linking static target lib/librte_sched.a 00:02:56.072 [404/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.072 [405/723] Linking target lib/librte_sched.so.24.2 00:02:56.072 [406/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:56.072 [407/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:56.072 [408/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:56.330 [409/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:56.330 [410/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.587 [411/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:56.587 [412/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.844 [413/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:57.102 [414/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:57.102 [415/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:57.102 [416/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:57.102 [417/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:57.359 [418/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:57.359 [419/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:57.359 [420/723] Linking static target lib/librte_ipsec.a 00:02:57.359 [421/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:57.617 [422/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.875 [423/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:57.875 [424/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:57.875 [425/723] Linking target lib/librte_ipsec.so.24.2 00:02:57.875 [426/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:57.875 [427/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:57.875 [428/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:57.875 [429/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:57.875 [430/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:57.875 [431/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:57.875 [432/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:58.811 [433/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:58.811 [434/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:58.811 [435/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:58.811 [436/723] Linking static target lib/librte_pdcp.a 00:02:58.811 [437/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:58.811 [438/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:58.811 [439/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:58.811 [440/723] Compiling C object 
lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:58.811 [441/723] Linking static target lib/librte_fib.a 00:02:59.069 [442/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.328 [443/723] Linking target lib/librte_pdcp.so.24.2 00:02:59.328 [444/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.328 [445/723] Linking target lib/librte_fib.so.24.2 00:02:59.586 [446/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:59.845 [447/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:59.845 [448/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:00.104 [449/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:00.104 [450/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:00.104 [451/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:00.104 [452/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:00.104 [453/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:00.362 [454/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:00.621 [455/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:00.621 [456/723] Linking static target lib/librte_port.a 00:03:00.879 [457/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:00.879 [458/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:00.879 [459/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:00.879 [460/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:01.138 [461/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.138 [462/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:01.138 [463/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.138 [464/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:01.138 [465/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:01.138 [466/723] Linking target lib/librte_port.so.24.2 00:03:01.138 [467/723] Linking static target lib/librte_pdump.a 00:03:01.397 [468/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:03:01.397 [469/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.397 [470/723] Linking target lib/librte_pdump.so.24.2 00:03:01.397 [471/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:01.397 [472/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:01.964 [473/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:01.964 [474/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:01.964 [475/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:01.964 [476/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:01.964 [477/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:02.223 [478/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:02.481 [479/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:02.481 [480/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:02.481 [481/723] 
Linking static target lib/librte_table.a 00:03:02.481 [482/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:02.739 [483/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:02.997 [484/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.254 [485/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:03.255 [486/723] Linking target lib/librte_table.so.24.2 00:03:03.255 [487/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:03:03.255 [488/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:03.512 [489/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:03.769 [490/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:03.769 [491/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:04.027 [492/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:04.027 [493/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:04.027 [494/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:04.027 [495/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:04.284 [496/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:04.542 [497/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:04.542 [498/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:04.799 [499/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:04.799 [500/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:04.799 [501/723] Linking static target lib/librte_graph.a 00:03:05.057 [502/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:05.057 [503/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:05.314 [504/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:05.571 [505/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.571 [506/723] Linking target lib/librte_graph.so.24.2 00:03:05.571 [507/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:05.571 [508/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:03:05.571 [509/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:05.828 [510/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:06.117 [511/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:06.117 [512/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:06.117 [513/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:06.117 [514/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:06.117 [515/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:06.375 [516/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:06.633 [517/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:06.633 [518/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:06.891 [519/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:06.891 [520/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.891 [521/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.891 [522/723] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:07.148 [523/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:07.148 [524/723] Linking static target lib/librte_node.a 00:03:07.148 [525/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:07.406 [526/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:07.406 [527/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.406 [528/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:07.406 [529/723] Linking target lib/librte_node.so.24.2 00:03:07.406 [530/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:07.406 [531/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:07.665 [532/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:07.665 [533/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.665 [534/723] Linking static target drivers/librte_bus_vdev.a 00:03:07.665 [535/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:07.665 [536/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.665 [537/723] Linking static target drivers/librte_bus_pci.a 00:03:07.923 [538/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.923 [539/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.923 [540/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.923 [541/723] Linking target drivers/librte_bus_vdev.so.24.2 00:03:07.923 [542/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:08.181 [543/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:08.181 [544/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:03:08.181 [545/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:08.181 [546/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.181 [547/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:08.181 [548/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:08.181 [549/723] Linking target drivers/librte_bus_pci.so.24.2 00:03:08.439 [550/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:03:08.439 [551/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:08.439 [552/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.439 [553/723] Linking static target drivers/librte_mempool_ring.a 00:03:08.439 [554/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.439 [555/723] Linking target drivers/librte_mempool_ring.so.24.2 00:03:08.696 [556/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:08.953 [557/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:09.518 [558/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:09.518 [559/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:09.518 [560/723] Linking static target 
drivers/net/i40e/base/libi40e_base.a 00:03:09.775 [561/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:10.339 [562/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:10.340 [563/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:10.340 [564/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:10.340 [565/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:10.340 [566/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:10.904 [567/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:10.904 [568/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:11.161 [569/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:11.161 [570/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:11.161 [571/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:11.419 [572/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:11.419 [573/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:11.984 [574/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:11.984 [575/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:11.984 [576/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:12.242 [577/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:12.807 [578/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:12.807 [579/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:12.807 [580/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:12.807 [581/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:12.807 [582/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:12.807 [583/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:12.807 [584/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:13.064 [585/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:13.064 [586/723] Linking static target lib/librte_vhost.a 00:03:13.322 [587/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:13.322 [588/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:13.322 [589/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:13.580 [590/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:13.580 [591/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:13.580 [592/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:13.580 [593/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:13.580 [594/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:14.146 [595/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:14.146 [596/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:14.146 [597/723] Linking static target drivers/librte_net_i40e.a 00:03:14.146 [598/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:14.146 [599/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:14.146 [600/723] Compiling C object 
app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:14.146 [601/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:14.146 [602/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:14.404 [603/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:14.404 [604/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.404 [605/723] Linking target lib/librte_vhost.so.24.2 00:03:14.662 [606/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:14.662 [607/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:14.920 [608/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.920 [609/723] Linking target drivers/librte_net_i40e.so.24.2 00:03:14.920 [610/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:14.920 [611/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:15.486 [612/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:15.486 [613/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:15.486 [614/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:15.744 [615/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:15.744 [616/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:16.002 [617/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:16.002 [618/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:16.002 [619/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:16.260 [620/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:16.518 [621/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:16.518 [622/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:16.518 [623/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:16.518 [624/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:16.776 [625/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:16.776 [626/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:16.776 [627/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:16.776 [628/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:17.034 [629/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:17.293 [630/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:17.575 [631/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:17.575 [632/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:17.575 [633/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:17.842 [634/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:17.842 [635/723] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:18.407 [636/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:18.665 [637/723] Linking static target lib/librte_pipeline.a 00:03:18.665 [638/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:18.665 [639/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:18.665 [640/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:18.922 [641/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:18.922 [642/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:18.922 [643/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:19.180 [644/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:19.180 [645/723] Linking target app/dpdk-dumpcap 00:03:19.180 [646/723] Linking target app/dpdk-graph 00:03:19.180 [647/723] Linking target app/dpdk-pdump 00:03:19.438 [648/723] Linking target app/dpdk-test-acl 00:03:19.438 [649/723] Linking target app/dpdk-proc-info 00:03:19.438 [650/723] Linking target app/dpdk-test-cmdline 00:03:19.696 [651/723] Linking target app/dpdk-test-crypto-perf 00:03:19.696 [652/723] Linking target app/dpdk-test-compress-perf 00:03:19.696 [653/723] Linking target app/dpdk-test-dma-perf 00:03:19.696 [654/723] Linking target app/dpdk-test-fib 00:03:19.696 [655/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:19.953 [656/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:19.953 [657/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:20.211 [658/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:20.211 [659/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:20.211 [660/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:20.469 [661/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:20.727 [662/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:20.727 [663/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:20.727 [664/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:20.727 [665/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:20.727 [666/723] Linking target app/dpdk-test-gpudev 00:03:20.985 [667/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:20.986 [668/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:20.986 [669/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:21.244 [670/723] Linking target app/dpdk-test-eventdev 00:03:21.244 [671/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:21.244 [672/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:21.244 [673/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.502 [674/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:21.502 [675/723] Linking target app/dpdk-test-bbdev 00:03:21.502 [676/723] Linking target lib/librte_pipeline.so.24.2 00:03:21.502 [677/723] Linking target app/dpdk-test-flow-perf 00:03:21.502 [678/723] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:21.760 [679/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:22.018 [680/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:22.018 [681/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:22.018 [682/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:22.018 [683/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:22.018 [684/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:22.018 [685/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:22.276 [686/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:22.534 [687/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:22.793 [688/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:22.793 [689/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:22.793 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:23.051 [691/723] Linking target app/dpdk-test-pipeline 00:03:23.051 [692/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:23.051 [693/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:23.309 [694/723] Linking target app/dpdk-test-mldev 00:03:23.567 [695/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:23.825 [696/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:23.825 [697/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:23.825 [698/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:23.825 [699/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:24.083 [700/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:24.083 [701/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:24.341 [702/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:24.600 [703/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:24.600 [704/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:24.600 [705/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:24.858 [706/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:25.117 [707/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:25.375 [708/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:25.633 [709/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:25.633 [710/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:25.633 [711/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:25.633 [712/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:25.633 [713/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:25.892 [714/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:26.151 [715/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:26.151 [716/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:26.151 [717/723] Linking target app/dpdk-test-sad 00:03:26.151 [718/723] Linking target app/dpdk-test-regex 00:03:26.151 [719/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:26.410 
[720/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:26.669 [721/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:26.927 [722/723] Linking target app/dpdk-testpmd 00:03:27.196 [723/723] Linking target app/dpdk-test-security-perf 00:03:27.196 01:46:42 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:27.196 01:46:42 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:27.196 01:46:42 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:27.196 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:27.196 [0/1] Installing files. 00:03:27.458 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:27.458 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.458 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.459 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:27.460 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.460 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:27.461 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.462 
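
The skeleton example copied in the next entry (basicfwd.c) is DPDK's smallest application, and every example installed in this stage shares the same startup boilerplate. As a rough illustration only (this is not the installed basicfwd.c), a minimal EAL hello-world might look like the sketch below; it assumes the rte_eal.h/rte_lcore.h headers installed later in this log are on the include path, e.g. via the libdpdk pkg-config file a DPDK install provides.

/* Illustrative sketch, not the installed basicfwd.c. */
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_debug.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() parses EAL arguments (e.g. -l 0-1) and returns
	 * the number it consumed, or a negative value on failure. */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");

	printf("hello from main lcore %u\n", rte_lcore_id());

	rte_eal_cleanup();
	return 0;
}
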
Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.462 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.721 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.721 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:27.721 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.721 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:27.721 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.721 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:27.721 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing 
lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing 
lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.721 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_regexdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.722 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:27.983 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:27.983 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing drivers/librte_mempool_ring.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:27.983 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:27.983 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:27.983 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.983 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing 
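
The EAL headers going in here (rte_eal.h, rte_launch.h, rte_lcore.h) carry the multicore launch API. A minimal sketch of running a function on every worker lcore, assuming only these installed headers; the worker() payload is a hypothetical placeholder:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

/* Hypothetical payload, run once on each worker lcore. */
static int
worker(void *arg)
{
	(void)arg;
	printf("worker on lcore %u\n", rte_lcore_id());
	return 0;
}

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	unsigned int lcore_id;
	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_remote_launch(worker, NULL, lcore_id);

	rte_eal_mp_wait_lcore();	/* join all workers */
	rte_eal_cleanup();
	return 0;
}
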
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h 
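
The rte_ring.h family just copied above is DPDK's lockless FIFO. A minimal single-producer/single-consumer sketch follows; the ring name and 1024-slot size are arbitrary illustrative choices (ring sizes must be powers of two unless RTE_RING created with the exact-size flag):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ring.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* "demo_ring" and 1024 slots are illustrative choices. */
	struct rte_ring *r = rte_ring_create("demo_ring", 1024,
			rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return 1;

	static int value = 7;
	void *obj = NULL;
	rte_ring_enqueue(r, &value);		/* 0 on success */
	if (rte_ring_dequeue(r, &obj) == 0)
		printf("dequeued %d\n", *(int *)obj);

	rte_eal_cleanup();
	return 0;
}
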
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to 
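
The mempool and mbuf headers installed above (rte_mempool.h, rte_mbuf.h) provide the packet-buffer pool used by virtually every DPDK application. A minimal sketch, with pool name and sizing taken from the conventional values the DPDK sample apps use:

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* "demo_pool" is illustrative; 8191 mbufs with a 250-entry
	 * per-lcore cache mirror the sample-app defaults. */
	struct rte_mempool *pool = rte_pktmbuf_pool_create("demo_pool",
			8191, 250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());
	if (pool == NULL)
		return 1;

	struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
	if (m != NULL) {
		rte_pktmbuf_append(m, 64);	/* reserve 64 B of data */
		rte_pktmbuf_free(m);		/* back to the pool */
	}

	rte_eal_cleanup();
	return 0;
}
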
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing 
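
rte_ethdev.h, installed above, is the port-level API behind apps like dpdk-testpmd. A small probe sketch, runnable even with no NICs bound (it simply reports zero ports); nothing here is specific to this build:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	uint16_t port;
	printf("%u port(s) available\n", rte_eth_dev_count_avail());
	RTE_ETH_FOREACH_DEV(port) {
		struct rte_ether_addr addr;
		if (rte_eth_macaddr_get(port, &addr) == 0)
			printf("port %u: " RTE_ETHER_ADDR_PRT_FMT "\n",
					port, RTE_ETHER_ADDR_BYTES(&addr));
	}

	rte_eal_cleanup();
	return 0;
}
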
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h 
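
The hash headers above (rte_hash.h plus the rte_jhash.h hash function) expose DPDK's cuckoo hash table. A minimal create/add/lookup sketch; the table name, entry count, and stored value are all hypothetical:

#include <stdio.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_hash.h>
#include <rte_jhash.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	struct rte_hash_parameters params = {
		.name = "demo_hash",		/* illustrative name */
		.entries = 1024,
		.key_len = sizeof(uint32_t),
		.hash_func = rte_jhash,
		.hash_func_init_val = 0,
		.socket_id = rte_socket_id(),
	};
	struct rte_hash *h = rte_hash_create(&params);
	if (h == NULL)
		return 1;

	uint32_t key = 42;
	rte_hash_add_key_data(h, &key, (void *)(uintptr_t)1337);

	void *data = NULL;
	if (rte_hash_lookup_data(h, &key, &data) >= 0)
		printf("key %u -> %lu\n", key,
				(unsigned long)(uintptr_t)data);

	rte_eal_cleanup();
	return 0;
}
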
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing 
/home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.987 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:27.987 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:27.987 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:27.987 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:27.987 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:27.987 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:03:27.987 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:27.987 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:27.987 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:27.987 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:27.987 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:27.987 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:27.987 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:27.987 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:27.987 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:27.987 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:27.987 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:27.987 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:27.987 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:27.987 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:27.987 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:27.987 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:27.987 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:27.987 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:27.987 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
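[Note, not part of the captured output: the symlink entries above and below follow the usual three-name scheme for a versioned shared library. The fully versioned file (e.g. librte_log.so.24.2) is the real object, the soname link (librte_log.so.24) is what the dynamic loader resolves at run time, and the unversioned name (librte_log.so) is the development link the compile-time linker uses. A minimal sketch of the equivalent commands:

  cd /home/vagrant/spdk_repo/dpdk/build/lib
  # soname link resolved by the runtime loader
  ln -sf librte_log.so.24.2 librte_log.so.24
  # unversioned development link used when linking with -lrte_log
  ln -sf librte_log.so.24 librte_log.so
]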
00:03:27.987 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:27.987 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:27.987 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:27.987 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:27.987 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:27.987 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:27.987 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:27.987 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:27.987 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:27.987 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:27.987 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:27.987 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:27.987 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:27.987 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:27.987 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:27.987 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:27.987 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:27.987 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:27.987 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:27.987 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:27.987 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:27.987 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:27.987 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:27.987 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:27.987 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:27.987 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:27.987 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:27.987 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 
00:03:27.987 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:27.987 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:27.987 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:27.987 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:27.987 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:27.987 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:27.987 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:27.987 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:27.987 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:27.987 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:27.987 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:27.987 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:27.987 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:27.987 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:27.987 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:27.987 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:27.987 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:27.987 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:27.987 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:27.987 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:27.987 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:27.987 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:27.987 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:27.987 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:27.987 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:27.987 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:27.987 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:27.987 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:27.987 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:27.987 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:27.987 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 
00:03:27.987 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:27.987 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:27.987 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:27.987 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:27.987 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:27.987 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:27.987 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:27.987 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:27.987 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:27.987 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:27.987 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:27.987 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:27.987 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:27.987 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:27.987 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:27.988 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:27.988 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:27.988 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:27.988 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:27.988 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:27.988 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:27.988 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:27.988 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:27.988 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:27.988 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:27.988 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:27.988 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:27.988 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:27.988 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:27.988 Installing symlink pointing to librte_fib.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:27.988 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:27.988 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:27.988 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:27.988 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:27.988 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:27.988 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:27.988 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:27.988 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:27.988 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:27.988 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:27.988 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:27.988 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:27.988 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:27.988 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:27.988 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:27.988 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:27.988 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:27.988 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:27.988 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:27.988 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:27.988 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:27.988 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:27.988 01:46:43 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:27.988 01:46:43 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:27.988 00:03:27.988 real 1m2.450s 00:03:27.988 user 7m47.386s 00:03:27.988 sys 1m4.673s 00:03:27.988 01:46:43 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:27.988 ************************************ 00:03:27.988 END TEST build_native_dpdk 00:03:27.988 01:46:43 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:27.988 
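[Note, not part of the captured output: the './librte_bus_pci.so' -> 'dpdk/pmds-24.2/...' rewrites above, finished off by the symlink-drivers-solibs.sh helper at the end of the install, relocate the driver (PMD) shared objects into the ABI-versioned plugin directory dpdk/pmds-24.2 under the library dir. As a hedged illustration, EAL's -d option accepts either a single .so or a whole plugin directory; the dpdk-testpmd binary named here is an assumption, not something this job built:

  # load every PMD found in the relocated plugin directory (illustrative)
  dpdk-testpmd -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 -- --help
]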
************************************ 00:03:27.988 01:46:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:27.988 01:46:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:27.988 01:46:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:27.988 01:46:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:27.988 01:46:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:27.988 01:46:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:27.988 01:46:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:27.988 01:46:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:28.245 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:28.245 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:28.245 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:28.245 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:28.810 Using 'verbs' RDMA provider 00:03:41.946 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:56.823 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:56.823 Creating mk/config.mk...done. 00:03:56.823 Creating mk/cc.flags.mk...done. 00:03:56.823 Type 'make' to build. 00:03:56.823 01:47:09 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:56.823 01:47:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:56.823 01:47:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:56.823 01:47:09 -- common/autotest_common.sh@10 -- $ set +x 00:03:56.823 ************************************ 00:03:56.823 START TEST make 00:03:56.823 ************************************ 00:03:56.823 01:47:09 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:56.823 make[1]: Nothing to be done for 'all'. 
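[Note, not part of the captured output: the configure step above resolves the freshly built DPDK through the pkg-config metadata installed earlier ("Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs", from the libdpdk.pc installed above). The same lookup can be reproduced by hand; a sketch, illustrative only:

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk      # version of the just-installed DPDK
  pkg-config --cflags --libs libdpdk   # compile/link flags the configure step consumes
]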
00:04:18.781 CC lib/ut/ut.o 00:04:18.781 CC lib/log/log.o 00:04:18.781 CC lib/log/log_flags.o 00:04:18.781 CC lib/ut_mock/mock.o 00:04:18.781 CC lib/log/log_deprecated.o 00:04:19.038 LIB libspdk_ut.a 00:04:19.038 LIB libspdk_ut_mock.a 00:04:19.038 SO libspdk_ut.so.2.0 00:04:19.038 SO libspdk_ut_mock.so.6.0 00:04:19.038 LIB libspdk_log.a 00:04:19.038 SYMLINK libspdk_ut.so 00:04:19.038 SO libspdk_log.so.7.0 00:04:19.038 SYMLINK libspdk_ut_mock.so 00:04:19.038 SYMLINK libspdk_log.so 00:04:19.295 CC lib/ioat/ioat.o 00:04:19.295 CXX lib/trace_parser/trace.o 00:04:19.295 CC lib/util/base64.o 00:04:19.295 CC lib/util/bit_array.o 00:04:19.295 CC lib/util/cpuset.o 00:04:19.295 CC lib/util/crc32.o 00:04:19.295 CC lib/util/crc16.o 00:04:19.295 CC lib/util/crc32c.o 00:04:19.295 CC lib/dma/dma.o 00:04:19.553 CC lib/vfio_user/host/vfio_user_pci.o 00:04:19.553 CC lib/util/crc32_ieee.o 00:04:19.553 CC lib/util/crc64.o 00:04:19.553 CC lib/util/dif.o 00:04:19.553 CC lib/util/fd.o 00:04:19.553 LIB libspdk_dma.a 00:04:19.553 CC lib/util/fd_group.o 00:04:19.553 SO libspdk_dma.so.4.0 00:04:19.553 CC lib/vfio_user/host/vfio_user.o 00:04:19.553 CC lib/util/file.o 00:04:19.553 LIB libspdk_ioat.a 00:04:19.553 SYMLINK libspdk_dma.so 00:04:19.553 CC lib/util/hexlify.o 00:04:19.553 SO libspdk_ioat.so.7.0 00:04:19.812 CC lib/util/iov.o 00:04:19.812 CC lib/util/math.o 00:04:19.812 SYMLINK libspdk_ioat.so 00:04:19.812 CC lib/util/net.o 00:04:19.812 CC lib/util/pipe.o 00:04:19.812 CC lib/util/strerror_tls.o 00:04:19.812 CC lib/util/string.o 00:04:19.812 CC lib/util/uuid.o 00:04:19.812 LIB libspdk_vfio_user.a 00:04:19.812 SO libspdk_vfio_user.so.5.0 00:04:19.812 CC lib/util/xor.o 00:04:19.812 CC lib/util/zipf.o 00:04:19.812 SYMLINK libspdk_vfio_user.so 00:04:20.070 LIB libspdk_util.a 00:04:20.070 SO libspdk_util.so.10.0 00:04:20.328 SYMLINK libspdk_util.so 00:04:20.328 LIB libspdk_trace_parser.a 00:04:20.328 SO libspdk_trace_parser.so.5.0 00:04:20.587 CC lib/rdma_provider/common.o 00:04:20.587 CC lib/conf/conf.o 00:04:20.587 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:20.587 CC lib/rdma_utils/rdma_utils.o 00:04:20.587 CC lib/json/json_parse.o 00:04:20.587 CC lib/env_dpdk/env.o 00:04:20.587 CC lib/json/json_util.o 00:04:20.587 SYMLINK libspdk_trace_parser.so 00:04:20.587 CC lib/idxd/idxd.o 00:04:20.587 CC lib/vmd/vmd.o 00:04:20.587 CC lib/vmd/led.o 00:04:20.587 CC lib/idxd/idxd_user.o 00:04:20.845 LIB libspdk_rdma_provider.a 00:04:20.845 CC lib/idxd/idxd_kernel.o 00:04:20.845 LIB libspdk_conf.a 00:04:20.845 SO libspdk_rdma_provider.so.6.0 00:04:20.845 CC lib/json/json_write.o 00:04:20.845 CC lib/env_dpdk/memory.o 00:04:20.845 SO libspdk_conf.so.6.0 00:04:20.845 LIB libspdk_rdma_utils.a 00:04:20.845 SYMLINK libspdk_rdma_provider.so 00:04:20.845 SO libspdk_rdma_utils.so.1.0 00:04:20.845 CC lib/env_dpdk/pci.o 00:04:20.845 SYMLINK libspdk_conf.so 00:04:20.845 CC lib/env_dpdk/init.o 00:04:20.845 SYMLINK libspdk_rdma_utils.so 00:04:20.845 CC lib/env_dpdk/threads.o 00:04:20.845 CC lib/env_dpdk/pci_ioat.o 00:04:20.845 CC lib/env_dpdk/pci_virtio.o 00:04:21.103 CC lib/env_dpdk/pci_vmd.o 00:04:21.103 CC lib/env_dpdk/pci_idxd.o 00:04:21.103 LIB libspdk_json.a 00:04:21.103 LIB libspdk_idxd.a 00:04:21.103 CC lib/env_dpdk/pci_event.o 00:04:21.103 SO libspdk_json.so.6.0 00:04:21.103 SO libspdk_idxd.so.12.0 00:04:21.103 SYMLINK libspdk_json.so 00:04:21.103 CC lib/env_dpdk/sigbus_handler.o 00:04:21.103 CC lib/env_dpdk/pci_dpdk.o 00:04:21.103 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:21.103 LIB libspdk_vmd.a 00:04:21.103 SYMLINK 
libspdk_idxd.so 00:04:21.103 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:21.362 SO libspdk_vmd.so.6.0 00:04:21.362 SYMLINK libspdk_vmd.so 00:04:21.362 CC lib/jsonrpc/jsonrpc_server.o 00:04:21.362 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:21.362 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:21.362 CC lib/jsonrpc/jsonrpc_client.o 00:04:21.626 LIB libspdk_jsonrpc.a 00:04:21.626 SO libspdk_jsonrpc.so.6.0 00:04:21.885 SYMLINK libspdk_jsonrpc.so 00:04:21.885 LIB libspdk_env_dpdk.a 00:04:22.143 SO libspdk_env_dpdk.so.15.0 00:04:22.143 CC lib/rpc/rpc.o 00:04:22.143 SYMLINK libspdk_env_dpdk.so 00:04:22.143 LIB libspdk_rpc.a 00:04:22.401 SO libspdk_rpc.so.6.0 00:04:22.401 SYMLINK libspdk_rpc.so 00:04:22.659 CC lib/notify/notify_rpc.o 00:04:22.659 CC lib/notify/notify.o 00:04:22.659 CC lib/trace/trace.o 00:04:22.659 CC lib/keyring/keyring.o 00:04:22.659 CC lib/trace/trace_flags.o 00:04:22.659 CC lib/keyring/keyring_rpc.o 00:04:22.659 CC lib/trace/trace_rpc.o 00:04:22.659 LIB libspdk_notify.a 00:04:22.916 SO libspdk_notify.so.6.0 00:04:22.916 LIB libspdk_keyring.a 00:04:22.916 SYMLINK libspdk_notify.so 00:04:22.916 LIB libspdk_trace.a 00:04:22.916 SO libspdk_keyring.so.1.0 00:04:22.916 SO libspdk_trace.so.10.0 00:04:22.916 SYMLINK libspdk_keyring.so 00:04:23.175 SYMLINK libspdk_trace.so 00:04:23.432 CC lib/sock/sock.o 00:04:23.432 CC lib/sock/sock_rpc.o 00:04:23.432 CC lib/thread/thread.o 00:04:23.432 CC lib/thread/iobuf.o 00:04:23.690 LIB libspdk_sock.a 00:04:23.948 SO libspdk_sock.so.10.0 00:04:23.948 SYMLINK libspdk_sock.so 00:04:24.206 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:24.206 CC lib/nvme/nvme_fabric.o 00:04:24.206 CC lib/nvme/nvme_ctrlr.o 00:04:24.206 CC lib/nvme/nvme_ns_cmd.o 00:04:24.206 CC lib/nvme/nvme_ns.o 00:04:24.206 CC lib/nvme/nvme_pcie_common.o 00:04:24.206 CC lib/nvme/nvme_qpair.o 00:04:24.206 CC lib/nvme/nvme_pcie.o 00:04:24.206 CC lib/nvme/nvme.o 00:04:24.772 LIB libspdk_thread.a 00:04:24.772 SO libspdk_thread.so.10.1 00:04:25.030 SYMLINK libspdk_thread.so 00:04:25.030 CC lib/nvme/nvme_quirks.o 00:04:25.030 CC lib/nvme/nvme_transport.o 00:04:25.030 CC lib/nvme/nvme_discovery.o 00:04:25.030 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:25.030 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:25.030 CC lib/nvme/nvme_tcp.o 00:04:25.030 CC lib/nvme/nvme_opal.o 00:04:25.289 CC lib/nvme/nvme_io_msg.o 00:04:25.289 CC lib/nvme/nvme_poll_group.o 00:04:25.547 CC lib/nvme/nvme_zns.o 00:04:25.547 CC lib/nvme/nvme_stubs.o 00:04:25.547 CC lib/nvme/nvme_auth.o 00:04:25.547 CC lib/nvme/nvme_cuse.o 00:04:25.805 CC lib/nvme/nvme_rdma.o 00:04:25.805 CC lib/accel/accel.o 00:04:25.805 CC lib/accel/accel_rpc.o 00:04:25.805 CC lib/accel/accel_sw.o 00:04:26.063 CC lib/blob/blobstore.o 00:04:26.321 CC lib/blob/request.o 00:04:26.321 CC lib/blob/zeroes.o 00:04:26.321 CC lib/init/json_config.o 00:04:26.321 CC lib/blob/blob_bs_dev.o 00:04:26.579 CC lib/init/subsystem.o 00:04:26.579 CC lib/init/subsystem_rpc.o 00:04:26.579 CC lib/init/rpc.o 00:04:26.579 CC lib/virtio/virtio.o 00:04:26.579 CC lib/virtio/virtio_vhost_user.o 00:04:26.579 CC lib/virtio/virtio_vfio_user.o 00:04:26.579 CC lib/virtio/virtio_pci.o 00:04:26.836 LIB libspdk_init.a 00:04:26.836 LIB libspdk_accel.a 00:04:26.836 SO libspdk_init.so.5.0 00:04:26.836 SO libspdk_accel.so.16.0 00:04:26.836 SYMLINK libspdk_init.so 00:04:26.836 SYMLINK libspdk_accel.so 00:04:27.095 LIB libspdk_virtio.a 00:04:27.095 SO libspdk_virtio.so.7.0 00:04:27.095 CC lib/bdev/bdev.o 00:04:27.095 CC lib/bdev/bdev_zone.o 00:04:27.095 CC lib/bdev/part.o 00:04:27.095 CC lib/bdev/bdev_rpc.o 00:04:27.095 CC 
lib/bdev/scsi_nvme.o 00:04:27.095 CC lib/event/app.o 00:04:27.095 CC lib/event/reactor.o 00:04:27.095 SYMLINK libspdk_virtio.so 00:04:27.095 CC lib/event/log_rpc.o 00:04:27.095 LIB libspdk_nvme.a 00:04:27.353 CC lib/event/app_rpc.o 00:04:27.353 CC lib/event/scheduler_static.o 00:04:27.353 SO libspdk_nvme.so.13.1 00:04:27.611 LIB libspdk_event.a 00:04:27.611 SO libspdk_event.so.14.0 00:04:27.611 SYMLINK libspdk_nvme.so 00:04:27.870 SYMLINK libspdk_event.so 00:04:29.242 LIB libspdk_blob.a 00:04:29.242 SO libspdk_blob.so.11.0 00:04:29.500 SYMLINK libspdk_blob.so 00:04:29.758 CC lib/blobfs/blobfs.o 00:04:29.758 CC lib/blobfs/tree.o 00:04:29.758 CC lib/lvol/lvol.o 00:04:29.758 LIB libspdk_bdev.a 00:04:29.758 SO libspdk_bdev.so.16.0 00:04:30.016 SYMLINK libspdk_bdev.so 00:04:30.016 CC lib/nbd/nbd.o 00:04:30.016 CC lib/ublk/ublk.o 00:04:30.016 CC lib/ftl/ftl_core.o 00:04:30.016 CC lib/nbd/nbd_rpc.o 00:04:30.016 CC lib/ftl/ftl_init.o 00:04:30.016 CC lib/nvmf/ctrlr.o 00:04:30.016 CC lib/ftl/ftl_layout.o 00:04:30.016 CC lib/scsi/dev.o 00:04:30.274 CC lib/scsi/lun.o 00:04:30.274 CC lib/ublk/ublk_rpc.o 00:04:30.532 CC lib/nvmf/ctrlr_discovery.o 00:04:30.532 LIB libspdk_blobfs.a 00:04:30.532 CC lib/nvmf/ctrlr_bdev.o 00:04:30.532 CC lib/scsi/port.o 00:04:30.532 SO libspdk_blobfs.so.10.0 00:04:30.532 LIB libspdk_lvol.a 00:04:30.532 LIB libspdk_nbd.a 00:04:30.532 CC lib/ftl/ftl_debug.o 00:04:30.532 SO libspdk_lvol.so.10.0 00:04:30.532 SO libspdk_nbd.so.7.0 00:04:30.532 SYMLINK libspdk_blobfs.so 00:04:30.794 CC lib/nvmf/subsystem.o 00:04:30.794 SYMLINK libspdk_lvol.so 00:04:30.794 CC lib/ftl/ftl_io.o 00:04:30.794 CC lib/nvmf/nvmf.o 00:04:30.794 SYMLINK libspdk_nbd.so 00:04:30.794 CC lib/nvmf/nvmf_rpc.o 00:04:30.794 CC lib/scsi/scsi.o 00:04:30.794 LIB libspdk_ublk.a 00:04:30.794 SO libspdk_ublk.so.3.0 00:04:30.794 CC lib/ftl/ftl_sb.o 00:04:30.794 CC lib/scsi/scsi_bdev.o 00:04:30.794 SYMLINK libspdk_ublk.so 00:04:30.794 CC lib/scsi/scsi_pr.o 00:04:31.060 CC lib/scsi/scsi_rpc.o 00:04:31.060 CC lib/ftl/ftl_l2p.o 00:04:31.060 CC lib/nvmf/transport.o 00:04:31.060 CC lib/scsi/task.o 00:04:31.060 CC lib/ftl/ftl_l2p_flat.o 00:04:31.318 CC lib/nvmf/tcp.o 00:04:31.318 CC lib/ftl/ftl_nv_cache.o 00:04:31.318 CC lib/ftl/ftl_band.o 00:04:31.318 LIB libspdk_scsi.a 00:04:31.318 CC lib/nvmf/stubs.o 00:04:31.318 SO libspdk_scsi.so.9.0 00:04:31.576 SYMLINK libspdk_scsi.so 00:04:31.576 CC lib/nvmf/mdns_server.o 00:04:31.576 CC lib/nvmf/rdma.o 00:04:31.576 CC lib/nvmf/auth.o 00:04:31.576 CC lib/ftl/ftl_band_ops.o 00:04:31.833 CC lib/ftl/ftl_writer.o 00:04:31.833 CC lib/iscsi/conn.o 00:04:31.833 CC lib/iscsi/init_grp.o 00:04:32.091 CC lib/vhost/vhost.o 00:04:32.091 CC lib/vhost/vhost_rpc.o 00:04:32.091 CC lib/vhost/vhost_scsi.o 00:04:32.091 CC lib/iscsi/iscsi.o 00:04:32.348 CC lib/ftl/ftl_rq.o 00:04:32.348 CC lib/iscsi/md5.o 00:04:32.348 CC lib/ftl/ftl_reloc.o 00:04:32.348 CC lib/iscsi/param.o 00:04:32.605 CC lib/vhost/vhost_blk.o 00:04:32.605 CC lib/iscsi/portal_grp.o 00:04:32.605 CC lib/iscsi/tgt_node.o 00:04:32.605 CC lib/iscsi/iscsi_subsystem.o 00:04:32.863 CC lib/ftl/ftl_l2p_cache.o 00:04:32.863 CC lib/ftl/ftl_p2l.o 00:04:32.863 CC lib/ftl/mngt/ftl_mngt.o 00:04:32.863 CC lib/vhost/rte_vhost_user.o 00:04:32.863 CC lib/iscsi/iscsi_rpc.o 00:04:33.121 CC lib/iscsi/task.o 00:04:33.121 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:33.121 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:33.121 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:33.380 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:33.380 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:33.380 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:04:33.380 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:33.380 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:33.380 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:33.638 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:33.638 LIB libspdk_iscsi.a 00:04:33.638 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:33.638 LIB libspdk_nvmf.a 00:04:33.638 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:33.638 CC lib/ftl/utils/ftl_conf.o 00:04:33.638 CC lib/ftl/utils/ftl_md.o 00:04:33.638 SO libspdk_iscsi.so.8.0 00:04:33.638 CC lib/ftl/utils/ftl_mempool.o 00:04:33.638 CC lib/ftl/utils/ftl_bitmap.o 00:04:33.638 SO libspdk_nvmf.so.19.0 00:04:33.896 CC lib/ftl/utils/ftl_property.o 00:04:33.896 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:33.896 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:33.896 SYMLINK libspdk_iscsi.so 00:04:33.896 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:33.896 LIB libspdk_vhost.a 00:04:33.896 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:33.896 SO libspdk_vhost.so.8.0 00:04:33.896 SYMLINK libspdk_nvmf.so 00:04:33.896 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:33.896 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:33.896 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:34.154 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:34.154 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:34.154 SYMLINK libspdk_vhost.so 00:04:34.154 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:34.154 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:34.154 CC lib/ftl/base/ftl_base_dev.o 00:04:34.154 CC lib/ftl/base/ftl_base_bdev.o 00:04:34.154 CC lib/ftl/ftl_trace.o 00:04:34.412 LIB libspdk_ftl.a 00:04:34.671 SO libspdk_ftl.so.9.0 00:04:34.929 SYMLINK libspdk_ftl.so 00:04:35.188 CC module/env_dpdk/env_dpdk_rpc.o 00:04:35.446 CC module/accel/iaa/accel_iaa.o 00:04:35.446 CC module/blob/bdev/blob_bdev.o 00:04:35.446 CC module/accel/ioat/accel_ioat.o 00:04:35.446 CC module/accel/error/accel_error.o 00:04:35.446 CC module/sock/uring/uring.o 00:04:35.446 CC module/accel/dsa/accel_dsa.o 00:04:35.446 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:35.446 CC module/keyring/file/keyring.o 00:04:35.446 CC module/sock/posix/posix.o 00:04:35.446 LIB libspdk_env_dpdk_rpc.a 00:04:35.446 SO libspdk_env_dpdk_rpc.so.6.0 00:04:35.446 SYMLINK libspdk_env_dpdk_rpc.so 00:04:35.446 CC module/accel/dsa/accel_dsa_rpc.o 00:04:35.446 CC module/keyring/file/keyring_rpc.o 00:04:35.446 CC module/accel/error/accel_error_rpc.o 00:04:35.446 CC module/accel/ioat/accel_ioat_rpc.o 00:04:35.446 CC module/accel/iaa/accel_iaa_rpc.o 00:04:35.446 LIB libspdk_scheduler_dynamic.a 00:04:35.704 SO libspdk_scheduler_dynamic.so.4.0 00:04:35.704 LIB libspdk_blob_bdev.a 00:04:35.704 SYMLINK libspdk_scheduler_dynamic.so 00:04:35.704 SO libspdk_blob_bdev.so.11.0 00:04:35.704 LIB libspdk_accel_dsa.a 00:04:35.704 LIB libspdk_keyring_file.a 00:04:35.704 SO libspdk_accel_dsa.so.5.0 00:04:35.704 SO libspdk_keyring_file.so.1.0 00:04:35.704 LIB libspdk_accel_ioat.a 00:04:35.704 LIB libspdk_accel_error.a 00:04:35.704 LIB libspdk_accel_iaa.a 00:04:35.704 SYMLINK libspdk_blob_bdev.so 00:04:35.704 SO libspdk_accel_error.so.2.0 00:04:35.704 SO libspdk_accel_ioat.so.6.0 00:04:35.704 SO libspdk_accel_iaa.so.3.0 00:04:35.704 SYMLINK libspdk_keyring_file.so 00:04:35.704 SYMLINK libspdk_accel_dsa.so 00:04:35.704 SYMLINK libspdk_accel_ioat.so 00:04:35.704 SYMLINK libspdk_accel_error.so 00:04:35.704 SYMLINK libspdk_accel_iaa.so 00:04:35.704 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:35.963 CC module/scheduler/gscheduler/gscheduler.o 00:04:35.963 CC module/keyring/linux/keyring.o 00:04:35.963 CC 
module/bdev/error/vbdev_error.o 00:04:35.963 CC module/bdev/delay/vbdev_delay.o 00:04:35.963 LIB libspdk_scheduler_dpdk_governor.a 00:04:35.963 CC module/bdev/gpt/gpt.o 00:04:35.963 CC module/bdev/lvol/vbdev_lvol.o 00:04:35.963 LIB libspdk_scheduler_gscheduler.a 00:04:35.963 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:35.963 SO libspdk_scheduler_gscheduler.so.4.0 00:04:35.963 CC module/blobfs/bdev/blobfs_bdev.o 00:04:35.963 LIB libspdk_sock_uring.a 00:04:35.963 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:35.963 CC module/keyring/linux/keyring_rpc.o 00:04:36.222 SYMLINK libspdk_scheduler_gscheduler.so 00:04:36.222 SO libspdk_sock_uring.so.5.0 00:04:36.222 CC module/bdev/gpt/vbdev_gpt.o 00:04:36.222 LIB libspdk_sock_posix.a 00:04:36.222 CC module/bdev/error/vbdev_error_rpc.o 00:04:36.222 SYMLINK libspdk_sock_uring.so 00:04:36.222 SO libspdk_sock_posix.so.6.0 00:04:36.222 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:36.222 LIB libspdk_keyring_linux.a 00:04:36.222 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:36.222 SYMLINK libspdk_sock_posix.so 00:04:36.222 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:36.222 SO libspdk_keyring_linux.so.1.0 00:04:36.222 SYMLINK libspdk_keyring_linux.so 00:04:36.480 LIB libspdk_bdev_error.a 00:04:36.480 SO libspdk_bdev_error.so.6.0 00:04:36.480 LIB libspdk_bdev_gpt.a 00:04:36.481 CC module/bdev/malloc/bdev_malloc.o 00:04:36.481 LIB libspdk_blobfs_bdev.a 00:04:36.481 SO libspdk_bdev_gpt.so.6.0 00:04:36.481 CC module/bdev/null/bdev_null.o 00:04:36.481 SO libspdk_blobfs_bdev.so.6.0 00:04:36.481 LIB libspdk_bdev_delay.a 00:04:36.481 SYMLINK libspdk_bdev_error.so 00:04:36.481 SO libspdk_bdev_delay.so.6.0 00:04:36.481 SYMLINK libspdk_bdev_gpt.so 00:04:36.481 CC module/bdev/nvme/bdev_nvme.o 00:04:36.481 CC module/bdev/passthru/vbdev_passthru.o 00:04:36.481 SYMLINK libspdk_blobfs_bdev.so 00:04:36.481 LIB libspdk_bdev_lvol.a 00:04:36.481 SYMLINK libspdk_bdev_delay.so 00:04:36.481 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:36.481 SO libspdk_bdev_lvol.so.6.0 00:04:36.739 CC module/bdev/raid/bdev_raid.o 00:04:36.739 CC module/bdev/split/vbdev_split.o 00:04:36.739 SYMLINK libspdk_bdev_lvol.so 00:04:36.739 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:36.739 CC module/bdev/null/bdev_null_rpc.o 00:04:36.739 CC module/bdev/uring/bdev_uring.o 00:04:36.739 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:36.739 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:36.997 CC module/bdev/aio/bdev_aio.o 00:04:36.997 LIB libspdk_bdev_null.a 00:04:36.997 CC module/bdev/split/vbdev_split_rpc.o 00:04:36.997 SO libspdk_bdev_null.so.6.0 00:04:36.997 LIB libspdk_bdev_malloc.a 00:04:36.997 LIB libspdk_bdev_passthru.a 00:04:36.997 SO libspdk_bdev_malloc.so.6.0 00:04:36.997 SYMLINK libspdk_bdev_null.so 00:04:36.997 CC module/bdev/nvme/nvme_rpc.o 00:04:36.997 SO libspdk_bdev_passthru.so.6.0 00:04:36.997 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:36.997 SYMLINK libspdk_bdev_malloc.so 00:04:36.997 CC module/bdev/nvme/bdev_mdns_client.o 00:04:36.997 LIB libspdk_bdev_split.a 00:04:36.997 CC module/bdev/uring/bdev_uring_rpc.o 00:04:36.997 SYMLINK libspdk_bdev_passthru.so 00:04:36.997 SO libspdk_bdev_split.so.6.0 00:04:37.255 CC module/bdev/nvme/vbdev_opal.o 00:04:37.255 SYMLINK libspdk_bdev_split.so 00:04:37.255 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:37.255 LIB libspdk_bdev_zone_block.a 00:04:37.255 CC module/bdev/aio/bdev_aio_rpc.o 00:04:37.255 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:37.255 SO libspdk_bdev_zone_block.so.6.0 00:04:37.256 LIB libspdk_bdev_uring.a 
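[Note, not part of the captured output: the prefixes in this make output are SPDK's quiet-build labels, read here as: CC compiles one object, LIB archives a static library, SO links the versioned shared object (produced because configure ran with --with-shared above), and SYMLINK creates the unversioned development link, mirroring the DPDK three-name scheme earlier in the log. A hedged spot-check of one result, assuming SPDK's default build/lib output directory:

  cd /home/vagrant/spdk_repo/spdk/build/lib
  readelf -d libspdk_util.so.10.0 | grep SONAME   # soname embedded in the versioned object
  ls -l libspdk_util.so                           # unversioned link created by a SYMLINK step
]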
00:04:37.256 CC module/bdev/ftl/bdev_ftl.o 00:04:37.256 SO libspdk_bdev_uring.so.6.0 00:04:37.256 SYMLINK libspdk_bdev_zone_block.so 00:04:37.514 SYMLINK libspdk_bdev_uring.so 00:04:37.514 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:37.514 LIB libspdk_bdev_aio.a 00:04:37.514 CC module/bdev/raid/bdev_raid_rpc.o 00:04:37.514 SO libspdk_bdev_aio.so.6.0 00:04:37.514 CC module/bdev/raid/bdev_raid_sb.o 00:04:37.514 CC module/bdev/iscsi/bdev_iscsi.o 00:04:37.514 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:37.514 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:37.514 SYMLINK libspdk_bdev_aio.so 00:04:37.514 CC module/bdev/raid/raid0.o 00:04:37.514 CC module/bdev/raid/raid1.o 00:04:37.514 LIB libspdk_bdev_ftl.a 00:04:37.514 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:37.514 CC module/bdev/raid/concat.o 00:04:37.773 SO libspdk_bdev_ftl.so.6.0 00:04:37.773 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:37.773 SYMLINK libspdk_bdev_ftl.so 00:04:37.773 LIB libspdk_bdev_iscsi.a 00:04:37.773 SO libspdk_bdev_iscsi.so.6.0 00:04:37.773 LIB libspdk_bdev_raid.a 00:04:38.031 SYMLINK libspdk_bdev_iscsi.so 00:04:38.031 SO libspdk_bdev_raid.so.6.0 00:04:38.031 LIB libspdk_bdev_virtio.a 00:04:38.031 SYMLINK libspdk_bdev_raid.so 00:04:38.031 SO libspdk_bdev_virtio.so.6.0 00:04:38.288 SYMLINK libspdk_bdev_virtio.so 00:04:38.546 LIB libspdk_bdev_nvme.a 00:04:38.805 SO libspdk_bdev_nvme.so.7.0 00:04:38.805 SYMLINK libspdk_bdev_nvme.so 00:04:39.371 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:39.371 CC module/event/subsystems/iobuf/iobuf.o 00:04:39.371 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:39.371 CC module/event/subsystems/keyring/keyring.o 00:04:39.371 CC module/event/subsystems/scheduler/scheduler.o 00:04:39.371 CC module/event/subsystems/vmd/vmd.o 00:04:39.371 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:39.371 CC module/event/subsystems/sock/sock.o 00:04:39.371 LIB libspdk_event_scheduler.a 00:04:39.371 LIB libspdk_event_keyring.a 00:04:39.371 LIB libspdk_event_vhost_blk.a 00:04:39.371 LIB libspdk_event_vmd.a 00:04:39.371 SO libspdk_event_scheduler.so.4.0 00:04:39.371 SO libspdk_event_keyring.so.1.0 00:04:39.371 LIB libspdk_event_iobuf.a 00:04:39.371 LIB libspdk_event_sock.a 00:04:39.371 SO libspdk_event_vhost_blk.so.3.0 00:04:39.371 SO libspdk_event_vmd.so.6.0 00:04:39.630 SO libspdk_event_sock.so.5.0 00:04:39.630 SO libspdk_event_iobuf.so.3.0 00:04:39.630 SYMLINK libspdk_event_scheduler.so 00:04:39.630 SYMLINK libspdk_event_keyring.so 00:04:39.630 SYMLINK libspdk_event_vhost_blk.so 00:04:39.630 SYMLINK libspdk_event_vmd.so 00:04:39.630 SYMLINK libspdk_event_sock.so 00:04:39.630 SYMLINK libspdk_event_iobuf.so 00:04:39.889 CC module/event/subsystems/accel/accel.o 00:04:40.148 LIB libspdk_event_accel.a 00:04:40.148 SO libspdk_event_accel.so.6.0 00:04:40.148 SYMLINK libspdk_event_accel.so 00:04:40.406 CC module/event/subsystems/bdev/bdev.o 00:04:40.664 LIB libspdk_event_bdev.a 00:04:40.664 SO libspdk_event_bdev.so.6.0 00:04:40.664 SYMLINK libspdk_event_bdev.so 00:04:40.922 CC module/event/subsystems/nbd/nbd.o 00:04:40.922 CC module/event/subsystems/scsi/scsi.o 00:04:40.922 CC module/event/subsystems/ublk/ublk.o 00:04:40.922 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:40.922 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:41.180 LIB libspdk_event_ublk.a 00:04:41.180 LIB libspdk_event_scsi.a 00:04:41.180 LIB libspdk_event_nbd.a 00:04:41.181 SO libspdk_event_ublk.so.3.0 00:04:41.181 SO libspdk_event_scsi.so.6.0 00:04:41.181 SO libspdk_event_nbd.so.6.0 00:04:41.181 SYMLINK libspdk_event_ublk.so 
00:04:41.181 SYMLINK libspdk_event_scsi.so 00:04:41.181 SYMLINK libspdk_event_nbd.so 00:04:41.181 LIB libspdk_event_nvmf.a 00:04:41.181 SO libspdk_event_nvmf.so.6.0 00:04:41.438 SYMLINK libspdk_event_nvmf.so 00:04:41.438 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:41.438 CC module/event/subsystems/iscsi/iscsi.o 00:04:41.696 LIB libspdk_event_vhost_scsi.a 00:04:41.696 LIB libspdk_event_iscsi.a 00:04:41.696 SO libspdk_event_vhost_scsi.so.3.0 00:04:41.696 SO libspdk_event_iscsi.so.6.0 00:04:41.696 SYMLINK libspdk_event_vhost_scsi.so 00:04:41.696 SYMLINK libspdk_event_iscsi.so 00:04:41.954 SO libspdk.so.6.0 00:04:41.954 SYMLINK libspdk.so 00:04:42.213 TEST_HEADER include/spdk/accel.h 00:04:42.213 TEST_HEADER include/spdk/accel_module.h 00:04:42.213 CC test/rpc_client/rpc_client_test.o 00:04:42.213 CXX app/trace/trace.o 00:04:42.213 TEST_HEADER include/spdk/assert.h 00:04:42.213 TEST_HEADER include/spdk/barrier.h 00:04:42.213 TEST_HEADER include/spdk/base64.h 00:04:42.213 TEST_HEADER include/spdk/bdev.h 00:04:42.213 TEST_HEADER include/spdk/bdev_module.h 00:04:42.213 TEST_HEADER include/spdk/bdev_zone.h 00:04:42.213 TEST_HEADER include/spdk/bit_array.h 00:04:42.213 TEST_HEADER include/spdk/bit_pool.h 00:04:42.213 TEST_HEADER include/spdk/blob_bdev.h 00:04:42.213 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:42.213 TEST_HEADER include/spdk/blobfs.h 00:04:42.213 TEST_HEADER include/spdk/blob.h 00:04:42.213 TEST_HEADER include/spdk/conf.h 00:04:42.213 TEST_HEADER include/spdk/config.h 00:04:42.213 TEST_HEADER include/spdk/cpuset.h 00:04:42.213 TEST_HEADER include/spdk/crc16.h 00:04:42.213 TEST_HEADER include/spdk/crc32.h 00:04:42.213 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:42.213 TEST_HEADER include/spdk/crc64.h 00:04:42.213 TEST_HEADER include/spdk/dif.h 00:04:42.213 TEST_HEADER include/spdk/dma.h 00:04:42.213 TEST_HEADER include/spdk/endian.h 00:04:42.213 TEST_HEADER include/spdk/env_dpdk.h 00:04:42.213 TEST_HEADER include/spdk/env.h 00:04:42.213 TEST_HEADER include/spdk/event.h 00:04:42.213 TEST_HEADER include/spdk/fd_group.h 00:04:42.213 TEST_HEADER include/spdk/fd.h 00:04:42.213 TEST_HEADER include/spdk/file.h 00:04:42.213 TEST_HEADER include/spdk/ftl.h 00:04:42.213 TEST_HEADER include/spdk/gpt_spec.h 00:04:42.213 TEST_HEADER include/spdk/hexlify.h 00:04:42.213 TEST_HEADER include/spdk/histogram_data.h 00:04:42.213 TEST_HEADER include/spdk/idxd.h 00:04:42.213 TEST_HEADER include/spdk/idxd_spec.h 00:04:42.213 TEST_HEADER include/spdk/init.h 00:04:42.213 CC examples/util/zipf/zipf.o 00:04:42.213 TEST_HEADER include/spdk/ioat.h 00:04:42.213 TEST_HEADER include/spdk/ioat_spec.h 00:04:42.213 TEST_HEADER include/spdk/iscsi_spec.h 00:04:42.213 TEST_HEADER include/spdk/json.h 00:04:42.213 CC examples/ioat/perf/perf.o 00:04:42.213 TEST_HEADER include/spdk/jsonrpc.h 00:04:42.213 TEST_HEADER include/spdk/keyring.h 00:04:42.213 TEST_HEADER include/spdk/keyring_module.h 00:04:42.213 CC test/thread/poller_perf/poller_perf.o 00:04:42.213 TEST_HEADER include/spdk/likely.h 00:04:42.213 TEST_HEADER include/spdk/log.h 00:04:42.213 TEST_HEADER include/spdk/lvol.h 00:04:42.213 TEST_HEADER include/spdk/memory.h 00:04:42.213 TEST_HEADER include/spdk/mmio.h 00:04:42.213 TEST_HEADER include/spdk/nbd.h 00:04:42.213 TEST_HEADER include/spdk/net.h 00:04:42.213 CC test/dma/test_dma/test_dma.o 00:04:42.213 TEST_HEADER include/spdk/notify.h 00:04:42.213 TEST_HEADER include/spdk/nvme.h 00:04:42.213 TEST_HEADER include/spdk/nvme_intel.h 00:04:42.213 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:42.213 
TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:42.213 TEST_HEADER include/spdk/nvme_spec.h 00:04:42.213 TEST_HEADER include/spdk/nvme_zns.h 00:04:42.213 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:42.213 CC test/app/bdev_svc/bdev_svc.o 00:04:42.214 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:42.214 TEST_HEADER include/spdk/nvmf.h 00:04:42.214 TEST_HEADER include/spdk/nvmf_spec.h 00:04:42.214 TEST_HEADER include/spdk/nvmf_transport.h 00:04:42.214 TEST_HEADER include/spdk/opal.h 00:04:42.214 TEST_HEADER include/spdk/opal_spec.h 00:04:42.214 TEST_HEADER include/spdk/pci_ids.h 00:04:42.214 TEST_HEADER include/spdk/pipe.h 00:04:42.214 TEST_HEADER include/spdk/queue.h 00:04:42.214 TEST_HEADER include/spdk/reduce.h 00:04:42.214 TEST_HEADER include/spdk/rpc.h 00:04:42.214 TEST_HEADER include/spdk/scheduler.h 00:04:42.472 TEST_HEADER include/spdk/scsi.h 00:04:42.472 TEST_HEADER include/spdk/scsi_spec.h 00:04:42.472 TEST_HEADER include/spdk/sock.h 00:04:42.472 TEST_HEADER include/spdk/stdinc.h 00:04:42.472 TEST_HEADER include/spdk/string.h 00:04:42.472 TEST_HEADER include/spdk/thread.h 00:04:42.472 TEST_HEADER include/spdk/trace.h 00:04:42.472 TEST_HEADER include/spdk/trace_parser.h 00:04:42.472 TEST_HEADER include/spdk/tree.h 00:04:42.472 TEST_HEADER include/spdk/ublk.h 00:04:42.472 TEST_HEADER include/spdk/util.h 00:04:42.472 TEST_HEADER include/spdk/uuid.h 00:04:42.472 TEST_HEADER include/spdk/version.h 00:04:42.472 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:42.472 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:42.472 LINK rpc_client_test 00:04:42.472 TEST_HEADER include/spdk/vhost.h 00:04:42.472 TEST_HEADER include/spdk/vmd.h 00:04:42.472 TEST_HEADER include/spdk/xor.h 00:04:42.472 TEST_HEADER include/spdk/zipf.h 00:04:42.472 CC test/env/mem_callbacks/mem_callbacks.o 00:04:42.472 CXX test/cpp_headers/accel.o 00:04:42.472 LINK zipf 00:04:42.472 LINK poller_perf 00:04:42.472 LINK interrupt_tgt 00:04:42.472 LINK ioat_perf 00:04:42.472 CXX test/cpp_headers/accel_module.o 00:04:42.472 LINK bdev_svc 00:04:42.472 CXX test/cpp_headers/assert.o 00:04:42.472 LINK spdk_trace 00:04:42.472 CXX test/cpp_headers/barrier.o 00:04:42.729 CC app/trace_record/trace_record.o 00:04:42.729 LINK test_dma 00:04:42.729 CXX test/cpp_headers/base64.o 00:04:42.729 CC examples/ioat/verify/verify.o 00:04:42.729 CXX test/cpp_headers/bdev.o 00:04:42.987 CC examples/sock/hello_world/hello_sock.o 00:04:42.987 CC examples/thread/thread/thread_ex.o 00:04:42.987 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:42.987 CC examples/vmd/lsvmd/lsvmd.o 00:04:42.987 LINK spdk_trace_record 00:04:42.987 LINK verify 00:04:42.987 CXX test/cpp_headers/bdev_module.o 00:04:42.987 LINK mem_callbacks 00:04:42.987 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:42.987 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.987 LINK lsvmd 00:04:43.245 LINK hello_sock 00:04:43.245 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:43.245 LINK thread 00:04:43.245 CXX test/cpp_headers/bdev_zone.o 00:04:43.245 CC test/env/vtophys/vtophys.o 00:04:43.245 CC app/nvmf_tgt/nvmf_main.o 00:04:43.245 CC app/iscsi_tgt/iscsi_tgt.o 00:04:43.245 LINK nvme_fuzz 00:04:43.245 CC examples/vmd/led/led.o 00:04:43.503 CXX test/cpp_headers/bit_array.o 00:04:43.503 LINK vtophys 00:04:43.503 LINK nvmf_tgt 00:04:43.503 LINK iscsi_tgt 00:04:43.503 LINK led 00:04:43.503 CC examples/idxd/perf/perf.o 00:04:43.503 CXX test/cpp_headers/bit_pool.o 00:04:43.503 LINK vhost_fuzz 00:04:43.503 CC test/event/event_perf/event_perf.o 00:04:43.503 CC test/event/reactor/reactor.o 00:04:43.761 CXX 
test/cpp_headers/blob_bdev.o 00:04:43.761 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:43.761 CXX test/cpp_headers/blobfs_bdev.o 00:04:43.761 CXX test/cpp_headers/blobfs.o 00:04:43.761 LINK event_perf 00:04:43.761 LINK reactor 00:04:43.761 LINK env_dpdk_post_init 00:04:43.761 LINK idxd_perf 00:04:44.020 CC app/spdk_tgt/spdk_tgt.o 00:04:44.020 CXX test/cpp_headers/blob.o 00:04:44.020 CC examples/nvme/hello_world/hello_world.o 00:04:44.020 CC test/event/reactor_perf/reactor_perf.o 00:04:44.020 CC examples/accel/perf/accel_perf.o 00:04:44.020 CXX test/cpp_headers/conf.o 00:04:44.020 CC test/env/memory/memory_ut.o 00:04:44.020 CC examples/nvme/reconnect/reconnect.o 00:04:44.020 CC test/nvme/aer/aer.o 00:04:44.020 LINK spdk_tgt 00:04:44.020 CC examples/blob/hello_world/hello_blob.o 00:04:44.278 LINK reactor_perf 00:04:44.278 LINK hello_world 00:04:44.278 CXX test/cpp_headers/config.o 00:04:44.278 CXX test/cpp_headers/cpuset.o 00:04:44.278 LINK hello_blob 00:04:44.536 LINK aer 00:04:44.536 CXX test/cpp_headers/crc16.o 00:04:44.536 CC app/spdk_lspci/spdk_lspci.o 00:04:44.536 LINK reconnect 00:04:44.536 CC test/event/app_repeat/app_repeat.o 00:04:44.536 LINK accel_perf 00:04:44.536 CC test/accel/dif/dif.o 00:04:44.536 LINK spdk_lspci 00:04:44.536 CXX test/cpp_headers/crc32.o 00:04:44.536 LINK app_repeat 00:04:44.536 LINK iscsi_fuzz 00:04:44.536 CXX test/cpp_headers/crc64.o 00:04:44.794 CC test/nvme/reset/reset.o 00:04:44.794 CC examples/blob/cli/blobcli.o 00:04:44.794 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:44.794 CXX test/cpp_headers/dif.o 00:04:44.794 CC app/spdk_nvme_perf/perf.o 00:04:44.794 CC app/spdk_nvme_identify/identify.o 00:04:45.051 LINK reset 00:04:45.051 CC test/event/scheduler/scheduler.o 00:04:45.051 CC test/app/histogram_perf/histogram_perf.o 00:04:45.051 CXX test/cpp_headers/dma.o 00:04:45.051 LINK dif 00:04:45.051 LINK histogram_perf 00:04:45.051 CXX test/cpp_headers/endian.o 00:04:45.309 CC test/nvme/sgl/sgl.o 00:04:45.309 LINK blobcli 00:04:45.309 LINK scheduler 00:04:45.309 CXX test/cpp_headers/env_dpdk.o 00:04:45.309 LINK nvme_manage 00:04:45.309 LINK memory_ut 00:04:45.309 CXX test/cpp_headers/env.o 00:04:45.309 CC test/app/jsoncat/jsoncat.o 00:04:45.568 LINK sgl 00:04:45.568 CC examples/nvme/arbitration/arbitration.o 00:04:45.568 CC app/spdk_nvme_discover/discovery_aer.o 00:04:45.568 CC test/env/pci/pci_ut.o 00:04:45.568 LINK jsoncat 00:04:45.568 CXX test/cpp_headers/event.o 00:04:45.568 CC test/blobfs/mkfs/mkfs.o 00:04:45.568 CC examples/bdev/hello_world/hello_bdev.o 00:04:45.568 LINK spdk_nvme_identify 00:04:45.826 LINK spdk_nvme_perf 00:04:45.826 CXX test/cpp_headers/fd_group.o 00:04:45.826 LINK spdk_nvme_discover 00:04:45.826 CC test/nvme/e2edp/nvme_dp.o 00:04:45.826 CC test/app/stub/stub.o 00:04:45.826 LINK mkfs 00:04:45.826 CXX test/cpp_headers/fd.o 00:04:45.826 LINK arbitration 00:04:45.826 CXX test/cpp_headers/file.o 00:04:45.826 LINK hello_bdev 00:04:45.826 LINK pci_ut 00:04:45.826 CC app/spdk_top/spdk_top.o 00:04:46.084 LINK stub 00:04:46.084 CC test/nvme/overhead/overhead.o 00:04:46.084 LINK nvme_dp 00:04:46.084 CXX test/cpp_headers/ftl.o 00:04:46.084 CC examples/nvme/hotplug/hotplug.o 00:04:46.084 CC test/nvme/err_injection/err_injection.o 00:04:46.084 CC test/nvme/startup/startup.o 00:04:46.084 CXX test/cpp_headers/gpt_spec.o 00:04:46.084 CC examples/bdev/bdevperf/bdevperf.o 00:04:46.341 LINK overhead 00:04:46.341 LINK startup 00:04:46.341 LINK err_injection 00:04:46.341 CXX test/cpp_headers/hexlify.o 00:04:46.341 LINK hotplug 00:04:46.341 
CC app/vhost/vhost.o 00:04:46.341 CC test/bdev/bdevio/bdevio.o 00:04:46.341 CXX test/cpp_headers/histogram_data.o 00:04:46.341 CC test/lvol/esnap/esnap.o 00:04:46.341 CXX test/cpp_headers/idxd.o 00:04:46.599 CC test/nvme/reserve/reserve.o 00:04:46.599 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.599 CC test/nvme/simple_copy/simple_copy.o 00:04:46.599 LINK vhost 00:04:46.599 CXX test/cpp_headers/idxd_spec.o 00:04:46.599 CC examples/nvme/abort/abort.o 00:04:46.858 LINK cmb_copy 00:04:46.858 LINK reserve 00:04:46.858 LINK bdevio 00:04:46.858 CXX test/cpp_headers/init.o 00:04:46.858 LINK simple_copy 00:04:46.858 LINK spdk_top 00:04:46.858 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:46.858 LINK bdevperf 00:04:46.858 CXX test/cpp_headers/ioat.o 00:04:47.116 LINK pmr_persistence 00:04:47.116 CC app/spdk_dd/spdk_dd.o 00:04:47.116 LINK abort 00:04:47.116 CC test/nvme/connect_stress/connect_stress.o 00:04:47.116 CC test/nvme/compliance/nvme_compliance.o 00:04:47.116 CC test/nvme/boot_partition/boot_partition.o 00:04:47.116 CC app/fio/nvme/fio_plugin.o 00:04:47.116 CXX test/cpp_headers/ioat_spec.o 00:04:47.116 CXX test/cpp_headers/iscsi_spec.o 00:04:47.116 CXX test/cpp_headers/json.o 00:04:47.374 LINK connect_stress 00:04:47.374 LINK boot_partition 00:04:47.374 CXX test/cpp_headers/jsonrpc.o 00:04:47.374 CXX test/cpp_headers/keyring.o 00:04:47.374 LINK nvme_compliance 00:04:47.374 CXX test/cpp_headers/keyring_module.o 00:04:47.374 CC app/fio/bdev/fio_plugin.o 00:04:47.374 CC examples/nvmf/nvmf/nvmf.o 00:04:47.631 LINK spdk_dd 00:04:47.631 CXX test/cpp_headers/likely.o 00:04:47.631 CC test/nvme/fused_ordering/fused_ordering.o 00:04:47.631 CXX test/cpp_headers/log.o 00:04:47.631 CXX test/cpp_headers/lvol.o 00:04:47.631 LINK spdk_nvme 00:04:47.631 CXX test/cpp_headers/memory.o 00:04:47.631 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:47.631 LINK fused_ordering 00:04:47.889 CXX test/cpp_headers/mmio.o 00:04:47.889 CC test/nvme/fdp/fdp.o 00:04:47.889 CXX test/cpp_headers/nbd.o 00:04:47.889 CC test/nvme/cuse/cuse.o 00:04:47.889 CXX test/cpp_headers/net.o 00:04:47.889 LINK nvmf 00:04:47.889 CXX test/cpp_headers/notify.o 00:04:47.889 CXX test/cpp_headers/nvme.o 00:04:47.889 CXX test/cpp_headers/nvme_intel.o 00:04:47.889 LINK doorbell_aers 00:04:47.889 LINK spdk_bdev 00:04:47.889 CXX test/cpp_headers/nvme_ocssd.o 00:04:48.148 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:48.149 CXX test/cpp_headers/nvme_spec.o 00:04:48.149 CXX test/cpp_headers/nvme_zns.o 00:04:48.149 CXX test/cpp_headers/nvmf_cmd.o 00:04:48.149 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:48.149 CXX test/cpp_headers/nvmf.o 00:04:48.149 LINK fdp 00:04:48.149 CXX test/cpp_headers/nvmf_spec.o 00:04:48.149 CXX test/cpp_headers/nvmf_transport.o 00:04:48.149 CXX test/cpp_headers/opal.o 00:04:48.407 CXX test/cpp_headers/opal_spec.o 00:04:48.407 CXX test/cpp_headers/pci_ids.o 00:04:48.407 CXX test/cpp_headers/pipe.o 00:04:48.407 CXX test/cpp_headers/queue.o 00:04:48.407 CXX test/cpp_headers/reduce.o 00:04:48.407 CXX test/cpp_headers/rpc.o 00:04:48.407 CXX test/cpp_headers/scheduler.o 00:04:48.407 CXX test/cpp_headers/scsi.o 00:04:48.407 CXX test/cpp_headers/scsi_spec.o 00:04:48.408 CXX test/cpp_headers/sock.o 00:04:48.408 CXX test/cpp_headers/stdinc.o 00:04:48.408 CXX test/cpp_headers/string.o 00:04:48.408 CXX test/cpp_headers/thread.o 00:04:48.408 CXX test/cpp_headers/trace.o 00:04:48.408 CXX test/cpp_headers/trace_parser.o 00:04:48.666 CXX test/cpp_headers/tree.o 00:04:48.666 CXX test/cpp_headers/ublk.o 00:04:48.666 CXX 
test/cpp_headers/util.o 00:04:48.666 CXX test/cpp_headers/uuid.o 00:04:48.666 CXX test/cpp_headers/version.o 00:04:48.666 CXX test/cpp_headers/vfio_user_pci.o 00:04:48.666 CXX test/cpp_headers/vfio_user_spec.o 00:04:48.666 CXX test/cpp_headers/vhost.o 00:04:48.666 CXX test/cpp_headers/vmd.o 00:04:48.666 CXX test/cpp_headers/xor.o 00:04:48.666 CXX test/cpp_headers/zipf.o 00:04:49.233 LINK cuse 00:04:51.137 LINK esnap 00:04:51.395 ************************************ 00:04:51.395 END TEST make 00:04:51.395 ************************************ 00:04:51.395 00:04:51.395 real 0m56.652s 00:04:51.395 user 5m11.105s 00:04:51.395 sys 1m2.995s 00:04:51.395 01:48:06 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:51.395 01:48:06 make -- common/autotest_common.sh@10 -- $ set +x 00:04:51.395 01:48:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:51.395 01:48:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:51.395 01:48:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:51.395 01:48:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.395 01:48:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:51.395 01:48:06 -- pm/common@44 -- $ pid=5931 00:04:51.395 01:48:06 -- pm/common@50 -- $ kill -TERM 5931 00:04:51.395 01:48:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.395 01:48:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:51.395 01:48:06 -- pm/common@44 -- $ pid=5933 00:04:51.395 01:48:06 -- pm/common@50 -- $ kill -TERM 5933 00:04:51.654 01:48:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.654 01:48:06 -- nvmf/common.sh@7 -- # uname -s 00:04:51.654 01:48:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.654 01:48:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.654 01:48:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.654 01:48:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.654 01:48:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.654 01:48:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.654 01:48:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.654 01:48:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.654 01:48:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.654 01:48:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.654 01:48:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:04:51.655 01:48:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:04:51.655 01:48:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.655 01:48:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.655 01:48:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:51.655 01:48:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.655 01:48:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.655 01:48:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.655 01:48:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.655 01:48:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.655 01:48:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.655 01:48:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.655 01:48:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.655 01:48:06 -- paths/export.sh@5 -- # export PATH 00:04:51.655 01:48:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.655 01:48:06 -- nvmf/common.sh@47 -- # : 0 00:04:51.655 01:48:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:51.655 01:48:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:51.655 01:48:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.655 01:48:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.655 01:48:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.655 01:48:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:51.655 01:48:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:51.655 01:48:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:51.655 01:48:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:51.655 01:48:06 -- spdk/autotest.sh@32 -- # uname -s 00:04:51.655 01:48:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:51.655 01:48:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:51.655 01:48:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:51.655 01:48:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:51.655 01:48:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:51.655 01:48:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:51.655 01:48:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:51.655 01:48:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:51.655 01:48:06 -- spdk/autotest.sh@48 -- # udevadm_pid=66359 00:04:51.655 01:48:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:51.655 01:48:06 -- pm/common@17 -- # local monitor 00:04:51.655 01:48:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.655 01:48:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:51.655 01:48:06 -- pm/common@25 -- # sleep 1 00:04:51.655 01:48:06 -- pm/common@21 -- # date +%s 00:04:51.655 01:48:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:51.655 01:48:06 -- pm/common@21 -- # date +%s 00:04:51.655 01:48:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721872086 
00:04:51.655 01:48:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721872086
00:04:51.655 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721872086_collect-vmstat.pm.log
00:04:51.655 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721872086_collect-cpu-load.pm.log
00:04:52.592 01:48:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:52.592 01:48:07 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:52.592 01:48:07 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:52.592 01:48:07 -- common/autotest_common.sh@10 -- # set +x
00:04:52.592 01:48:07 -- spdk/autotest.sh@59 -- # create_test_list
00:04:52.592 01:48:07 -- common/autotest_common.sh@748 -- # xtrace_disable
00:04:52.592 01:48:07 -- common/autotest_common.sh@10 -- # set +x
00:04:52.851 01:48:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:04:52.851 01:48:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:04:52.851 01:48:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:04:52.851 01:48:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:04:52.851 01:48:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:04:52.851 01:48:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:52.851 01:48:07 -- common/autotest_common.sh@1455 -- # uname
00:04:52.851 01:48:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:04:52.851 01:48:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:52.851 01:48:07 -- common/autotest_common.sh@1475 -- # uname
00:04:52.851 01:48:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:04:52.851 01:48:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:04:52.851 01:48:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:04:52.851 01:48:07 -- spdk/autotest.sh@72 -- # hash lcov
00:04:52.851 01:48:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:04:52.851 01:48:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:04:52.851 --rc lcov_branch_coverage=1
00:04:52.851 --rc lcov_function_coverage=1
00:04:52.851 --rc genhtml_branch_coverage=1
00:04:52.851 --rc genhtml_function_coverage=1
00:04:52.851 --rc genhtml_legend=1
00:04:52.851 --rc geninfo_all_blocks=1
00:04:52.851 '
00:04:52.851 01:48:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:04:52.851 --rc lcov_branch_coverage=1
00:04:52.851 --rc lcov_function_coverage=1
00:04:52.851 --rc genhtml_branch_coverage=1
00:04:52.851 --rc genhtml_function_coverage=1
00:04:52.851 --rc genhtml_legend=1
00:04:52.851 --rc geninfo_all_blocks=1
00:04:52.851 '
00:04:52.851 01:48:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:04:52.851 --rc lcov_branch_coverage=1
00:04:52.851 --rc lcov_function_coverage=1
00:04:52.851 --rc genhtml_branch_coverage=1
00:04:52.851 --rc genhtml_function_coverage=1
00:04:52.851 --rc genhtml_legend=1
00:04:52.851 --rc geninfo_all_blocks=1
00:04:52.851 --no-external'
00:04:52.851 01:48:07 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:04:52.851 --rc lcov_branch_coverage=1
00:04:52.851 --rc lcov_function_coverage=1
00:04:52.851 --rc genhtml_branch_coverage=1
00:04:52.851 --rc genhtml_function_coverage=1
00:04:52.851 --rc genhtml_legend=1
00:04:52.851 --rc geninfo_all_blocks=1
00:04:52.851 --no-external'
00:04:52.851 01:48:07 -- spdk/autotest.sh@83 -- # lcov
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:52.851 lcov: LCOV version 1.14 00:04:52.851 01:48:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:07.758 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:07.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:17.732 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:17.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:17.732 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:17.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:17.732 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:17.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:17.732 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:17.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:17.732 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:17.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:17.732 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:17.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:17.733 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:17.733 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:17.733 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:17.733 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:17.734 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:17.734 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:17.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:20.265 01:48:35 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:20.265 01:48:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.265 01:48:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.265 01:48:35 -- spdk/autotest.sh@91 -- # rm -f 00:05:20.265 01:48:35 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.833 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:20.833 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:20.833 01:48:35 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:20.833 01:48:35 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:20.833 01:48:35 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:20.833 01:48:35 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:20.833 01:48:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:20.833 01:48:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:20.833 01:48:35 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:20.833 01:48:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:20.833 01:48:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:20.833 01:48:35 -- common/autotest_common.sh@1672 -- # for 
nvme in /sys/block/nvme*
00:05:20.833 01:48:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:05:20.833 01:48:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:05:20.833 01:48:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:05:20.833 01:48:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:20.833 01:48:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:20.833 01:48:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2
00:05:20.833 01:48:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n2
00:05:20.833 01:48:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:05:20.833 01:48:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:20.833 01:48:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:20.833 01:48:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3
00:05:20.833 01:48:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n3
00:05:20.833 01:48:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:05:20.833 01:48:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:20.833 01:48:35 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:05:20.833 01:48:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:20.833 01:48:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:20.833 01:48:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:05:20.833 01:48:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:05:20.833 01:48:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:20.833 No valid GPT data, bailing
00:05:20.833 01:48:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:20.833 01:48:35 -- scripts/common.sh@391 -- # pt=
00:05:20.833 01:48:35 -- scripts/common.sh@392 -- # return 1
00:05:20.833 01:48:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:20.833 1+0 records in
00:05:20.833 1+0 records out
00:05:20.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503347 s, 208 MB/s
00:05:20.833 01:48:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:20.833 01:48:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:20.833 01:48:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1
00:05:20.833 01:48:36 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt
00:05:20.833 01:48:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:05:20.833 No valid GPT data, bailing
00:05:20.833 01:48:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:05:20.833 01:48:36 -- scripts/common.sh@391 -- # pt=
00:05:20.833 01:48:36 -- scripts/common.sh@392 -- # return 1
00:05:20.833 01:48:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:05:20.833 1+0 records in
00:05:20.833 1+0 records out
00:05:20.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457542 s, 229 MB/s
00:05:20.833 01:48:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:20.833 01:48:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:20.833 01:48:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2
00:05:20.833 01:48:36 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt
00:05:20.833 01:48:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:05:21.092 No valid GPT data, bailing
00:05:21.092 01:48:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:05:21.092 01:48:36 -- scripts/common.sh@391 -- # pt=
00:05:21.092 01:48:36 -- scripts/common.sh@392 -- # return 1
00:05:21.092 01:48:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:05:21.092 1+0 records in
00:05:21.092 1+0 records out
00:05:21.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0036855 s, 285 MB/s
00:05:21.092 01:48:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:21.092 01:48:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:21.092 01:48:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3
00:05:21.092 01:48:36 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt
00:05:21.092 01:48:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:05:21.092 No valid GPT data, bailing
00:05:21.092 01:48:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:05:21.092 01:48:36 -- scripts/common.sh@391 -- # pt=
00:05:21.092 01:48:36 -- scripts/common.sh@392 -- # return 1
00:05:21.092 01:48:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:05:21.092 1+0 records in
00:05:21.092 1+0 records out
00:05:21.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412448 s, 254 MB/s
00:05:21.092 01:48:36 -- spdk/autotest.sh@118 -- # sync
00:05:21.092 01:48:36 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:21.092 01:48:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:21.092 01:48:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:22.994 01:48:38 -- spdk/autotest.sh@124 -- # uname -s
00:05:22.994 01:48:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:05:22.994 01:48:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:05:22.994 01:48:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:22.994 01:48:38 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:22.994 01:48:38 -- common/autotest_common.sh@10 -- # set +x
00:05:22.994 ************************************
00:05:22.994 START TEST setup.sh
00:05:22.994 ************************************
00:05:22.994 01:48:38 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:05:22.994 * Looking for test storage...
00:05:22.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:22.994 01:48:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:05:22.994 01:48:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:05:22.994 01:48:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:05:22.994 01:48:38 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:22.994 01:48:38 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:22.994 01:48:38 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:22.994 ************************************
00:05:22.994 START TEST acl
00:05:22.994 ************************************
00:05:22.994 01:48:38 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:05:23.252 * Looking for test storage...
00:05:23.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:23.252 01:48:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:23.252 01:48:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:23.252 01:48:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:23.252 01:48:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:23.252 01:48:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:23.252 01:48:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:23.252 01:48:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:23.252 01:48:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.253 01:48:38 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.819 01:48:39 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:23.819 01:48:39 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:23.819 01:48:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:23.819 01:48:39 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:23.819 01:48:39 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.819 01:48:39 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:24.385 01:48:39 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:24.385 01:48:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:24.385 01:48:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:24.385 Hugepages 00:05:24.385 node hugesize free / total 00:05:24.385 01:48:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:24.385 01:48:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:24.385 01:48:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:24.385 00:05:24.385 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:24.385 01:48:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:24.385 01:48:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:24.385 01:48:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:24.644 01:48:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:24.644 01:48:39 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.644 01:48:39 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.644 01:48:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:24.644 ************************************ 00:05:24.644 START TEST denied 00:05:24.644 ************************************ 00:05:24.644 01:48:39 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:24.644 01:48:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:24.644 01:48:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:24.644 01:48:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:24.644 01:48:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.644 01:48:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.577 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.577 01:48:40 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.142 00:05:26.142 real 0m1.392s 00:05:26.142 user 0m0.549s 00:05:26.142 sys 0m0.772s 00:05:26.142 01:48:41 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.142 01:48:41 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:26.142 ************************************ 00:05:26.142 END TEST denied 00:05:26.142 ************************************ 00:05:26.142 01:48:41 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:26.142 01:48:41 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.142 01:48:41 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.142 01:48:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:26.142 ************************************ 00:05:26.142 START TEST allowed 00:05:26.142 ************************************ 00:05:26.142 01:48:41 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:26.143 01:48:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:26.143 01:48:41 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:26.143 01:48:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:26.143 01:48:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.143 01:48:41 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.076 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.076 01:48:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.642 00:05:27.642 real 0m1.470s 00:05:27.642 user 0m0.660s 00:05:27.642 sys 0m0.796s 00:05:27.642 01:48:42 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.642 ************************************ 00:05:27.642 END TEST 
allowed 00:05:27.642 ************************************ 00:05:27.642 01:48:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:27.642 ************************************ 00:05:27.642 END TEST acl 00:05:27.642 ************************************ 00:05:27.642 00:05:27.642 real 0m4.597s 00:05:27.642 user 0m2.038s 00:05:27.642 sys 0m2.481s 00:05:27.642 01:48:42 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.642 01:48:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:27.642 01:48:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:27.642 01:48:42 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.642 01:48:42 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.642 01:48:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:27.642 ************************************ 00:05:27.642 START TEST hugepages 00:05:27.642 ************************************ 00:05:27.642 01:48:42 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:27.914 * Looking for test storage... 00:05:27.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 4520160 kB' 'MemAvailable: 7395828 kB' 'Buffers: 2436 kB' 'Cached: 3079256 kB' 'SwapCached: 0 kB' 'Active: 436516 kB' 'Inactive: 2750300 kB' 'Active(anon): 115616 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750300 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 106572 kB' 'Mapped: 48672 kB' 'Shmem: 10492 kB' 'KReclaimable: 83260 kB' 'Slab: 161896 kB' 'SReclaimable: 83260 kB' 'SUnreclaim: 78636 kB' 'KernelStack: 6620 kB' 'PageTables: 4472 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 337432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:27.914 01:48:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... identical compare/continue xtrace repeated for each remaining /proc/meminfo field until the target matches ...]
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:27.915 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:27.916 01:48:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
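The lookup traced above is simple despite the verbose xtrace: setup/common.sh's get_meminfo scans /proc/meminfo field by field (the empty node= selects the global file rather than a per-node one) and echoes the value of the requested field, here 2048 kB for Hugepagesize on this VM. A minimal stand-alone sketch of that pattern, plus the clear_hp zeroing the trace runs next (simplified from the traced statements; the real helpers in test/setup also handle per-node meminfo files and the nodes_sys map):

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, as the traced lookup does.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the per-field compare/continue above
            echo "$val"                       # e.g. Hugepagesize -> 2048 (kB) here
            return 0
        done </proc/meminfo
        return 1
    }

    # Zero every per-node hugepage pool so the test starts from a clean slate
    # (needs root; the sysfs paths are the ones visible in the clear_hp trace).
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 >"$hp/nr_hugepages"
            done
        done
    }

    default_hugepages=$(get_meminfo Hugepagesize)  # -> 2048, matching the echo above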
00:05:27.916 01:48:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:27.916 01:48:43 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:27.916 01:48:43 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:27.916 01:48:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:27.916 ************************************
00:05:27.916 START TEST default_setup
00:05:27.916 ************************************
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:27.916 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:28.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:28.494 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:28.757 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
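The get_test_nr_hugepages 2097152 0 trace above fixes the pool for this test: 2097152 kB requested divided by the 2048 kB page size gives 1024 pages, all assigned to node 0, after which scripts/setup.sh reserves them and rebinds the two emulated NVMe controllers to uio_pci_generic. The arithmetic, reproduced stand-alone (a sketch; variable names follow the trace):

    # Page-count computation from the get_test_nr_hugepages 2097152 0 trace.
    size=2097152                                 # requested pool, in kB (2 GiB)
    default_hugepages=2048                       # Hugepagesize, in kB
    nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024 pages
    declare -a nodes_test
    nodes_test[0]=$nr_hugepages                  # the single user node: node 0
    echo "nr_hugepages=$nr_hugepages"            # -> 1024, as traced above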
00:05:28.757 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6592664 kB' 'MemAvailable: 9468160 kB' 'Buffers: 2436 kB' 'Cached: 3079248 kB' 'SwapCached: 0 kB' 'Active: 453100 kB' 'Inactive: 2750308 kB' 'Active(anon): 132200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123328 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161592 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78688 kB' 'KernelStack: 6560 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:28.758 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical compare/continue xtrace repeated for each remaining field until the target matches ...]
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
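verify_nr_hugepages, traced here, cross-checks the reservation against /proc/meminfo: the THP policy string 'always [madvise] never' is not '[never]', AnonHugePages comes back 0 (no transparent hugepages in use, hence anon=0), and the post-setup snapshot shows HugePages_Total: 1024, HugePages_Free: 1024, Hugetlb: 2097152 kB, exactly the 1024 x 2048 kB just requested. A sketch of that bookkeeping using the get_meminfo pattern sketched earlier (illustrative only; the script's own checks are more involved):

    # Read the counters the verify step consults; expected values are the
    # ones in the snapshot above.
    total=$(get_meminfo HugePages_Total)   # 1024 pages after setup.sh
    free=$(get_meminfo HugePages_Free)     # 1024: the pool is still untouched
    anon=$(get_meminfo AnonHugePages)      # 0 (kB): no THP in play, so anon=0
    surp=$(get_meminfo HugePages_Surp)     # 0: the lookup the trace performs next
    # A freshly reserved, unused pool should be fully free:
    (( free == total )) || echo "unexpected hugepage usage: free=$free of $total" >&2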
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6592664 kB' 'MemAvailable: 9468160 kB' 'Buffers: 2436 kB' 'Cached: 3079248 kB' 'SwapCached: 0 kB' 'Active: 452760 kB' 'Inactive: 2750308 kB' 'Active(anon): 131860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123008 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161588 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78684 kB' 'KernelStack: 6544 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.759 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical compare/continue xtrace repeated for the intermediate fields ...]
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6592664 kB' 'MemAvailable: 9468164 kB' 'Buffers: 2436 kB' 'Cached: 3079248 kB' 'SwapCached: 0 kB' 'Active: 452984 kB' 'Inactive: 2750312 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750312 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123236 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161568 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78664 kB' 'KernelStack: 6544 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.762 01:48:43 setup.sh.hugepages.default_setup -- 
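For readability, the scan loop that the setup/common.sh trace keeps repeating can be summarized as a short sketch. This is a reconstruction from the trace alone, not the verbatim SPDK helper; it splits each "Key: value" line of the chosen meminfo file on ': ' and prints the value of the requested key:

get_meminfo_sketch() {
    # Reconstructed from the trace; argument names mirror the log (get, node).
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # With a node argument, the trace switches to that node's own meminfo file
    # (the per-node variant also strips the "Node N " line prefix; see the
    # demonstration further down).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        # Print the value once the requested key comes up; skip everything
        # else, which is exactly the repeated compare/continue pattern above.
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < "$mem_f"
    return 1
}

Against the snapshot just printed, get_meminfo_sketch HugePages_Surp prints 0, matching the surp=0 assignment in the trace.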
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:28.761 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6592664 kB' 'MemAvailable: 9468164 kB' 'Buffers: 2436 kB' 'Cached: 3079248 kB' 'SwapCached: 0 kB' 'Active: 452984 kB' 'Inactive: 2750312 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750312 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123236 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161568 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78664 kB' 'KernelStack: 6544 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
[trace condensed: setup/common.sh@31-32 repeat the read/compare/continue cycle for every key from MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:28.763 nr_hugepages=1024
00:05:28.763 resv_hugepages=0
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:28.763 surplus_hugepages=0
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:28.763 anon_hugepages=0
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
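The two arithmetic guards just traced (hugepages.sh@107 and @109) are plain consistency checks. The literal 1024 on their left-hand side was already expanded by xtrace; in the snapshot above it equals both HugePages_Total and HugePages_Free. With this run's values the checks reduce to:

# Values echoed by the trace above.
nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0, so true
(( 1024 == nr_hugepages ))                 # true: every configured page present, none surplus or reserved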
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:28.763 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:28.764 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:28.764 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6592416 kB' 'MemAvailable: 9467916 kB' 'Buffers: 2436 kB' 'Cached: 3079248 kB' 'SwapCached: 0 kB' 'Active: 452992 kB' 'Inactive: 2750312 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750312 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123244 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161548 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78644 kB' 'KernelStack: 6528 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
[trace condensed: setup/common.sh@31-32 repeat the read/compare/continue cycle for every key from MemTotal through Unaccepted; none matches HugePages_Total]
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
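get_nodes, traced above, discovers NUMA nodes with an extglob pattern and records a hugepage count per node. A minimal sketch, with one stated assumption: the 1024 in the trace is a value already expanded by xtrace, and the sketch assumes it was read from the node's standard sysfs 2 MiB hugepage counter (the actual source is not visible in this log):

shopt -s extglob                      # required for the +([0-9]) pattern
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything up to the last "node", leaving the id.
    # Assumed source path; not shown in the trace itself.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}             # 1 on this single-node VM, as in the trace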
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:28.765 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:28.766 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6592416 kB' 'MemUsed: 5649552 kB' 'SwapCached: 0 kB' 'Active: 452552 kB' 'Inactive: 2750316 kB' 'Active(anon): 131652 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 3081688 kB' 'Mapped: 48664 kB' 'AnonPages: 123080 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82904 kB' 'Slab: 161544 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:28.766 01:48:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... [xtrace condensed: the @31/@32 read loop walks every node0 field, MemTotal through HugePages_Free, hitting the @32 continue on each mismatch]
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
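Condensed above, the get_meminfo scan is a plain key/value parse: pick the per-node meminfo file when a node id is given, strip the "Node N " prefix, then split each line on IFS=': ' until the requested field matches. A simplified re-creation of the helper as it behaves in this trace (the real setup/common.sh differs in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern used in the prefix strip
    # Simplified get_meminfo: print the value of one field from (per-node) meminfo.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem line
        # Per-node counters live in sysfs and are prefixed with "Node N ".
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Surp 0   # prints 0 on this runner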
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:28.767 node0=1024 expecting 1024
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:28.767
00:05:28.767 real	0m0.985s
00:05:28.767 user	0m0.458s
00:05:28.767 sys	0m0.457s
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:28.767 01:48:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:28.767 ************************************
00:05:28.767 END TEST default_setup
00:05:28.767 ************************************
00:05:28.767 01:48:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
01:48:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
01:48:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
01:48:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:29.025 ************************************
00:05:29.026 START TEST per_node_1G_alloc
00:05:29.026 ************************************
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
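The get_test_nr_hugepages trace above turns the requested pool size into a page count: 1048576 kB at the 2048 kB default hugepage size is 512 pages, assigned to each requested node. A sketch of that arithmetic (the helper body is illustrative; the real script takes default_hugepages from its environment):

    #!/usr/bin/env bash
    # Size-to-page-count step from setup/hugepages.sh@49-71 above.
    get_test_nr_hugepages() {
        local size=$1; shift              # requested pool size in kB
        local node_ids=("$@")             # remaining args: target NUMA nodes
        local default_hugepages=2048      # kB, matches Hugepagesize on this runner
        local nr_hugepages=$((size / default_hugepages))
        local node
        declare -g -a nodes_test=()
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages   # each listed node gets the full count
        done
        echo "nr_hugepages=$nr_hugepages node(s)=${node_ids[*]}"
    }
    get_test_nr_hugepages 1048576 0   # -> nr_hugepages=512 node(s)=0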
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:29.026 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:29.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:29.290 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:29.290 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7638484 kB' 'MemAvailable: 10513992 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 453504 kB' 'Inactive: 2750320 kB' 'Active(anon): 132604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123728 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161596 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78692 kB' 'KernelStack: 6532 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:29.290 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... [xtrace condensed: the @31/@32 read loop walks every field, MemTotal through HardwareCorrupted, hitting the @32 continue on each mismatch]
00:05:29.291 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:29.291 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.291 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
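The @96/@97 sequence above gates on transparent hugepage state: only when THP is not disabled ("[never]") does AnonHugePages need to be read and folded into the accounting. A sketch of that gate, using the standard kernel sysfs path and the get_meminfo helper sketched earlier:

    # Anon-hugepage gate from setup/hugepages.sh@96-97 above.
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                 # kB; 0 on this runner
    fi
    echo "anon=$anon"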
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7638484 kB' 'MemAvailable: 10513992 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452892 kB' 'Inactive: 2750320 kB' 'Active(anon): 131992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123088 kB' 'Mapped: 48764 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161640 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78736 kB' 'KernelStack: 6544 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... [xtrace condensed: the @31/@32 read loop again walks every field, MemTotal through HugePages_Rsvd, hitting the @32 continue on each mismatch]
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
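With anon and surp gathered, and resv read next via HugePages_Rsvd below, verify_nr_hugepages is assembling the accounting identity checked at hugepages.sh@110 earlier in the trace: the kernel's total must equal the requested pages plus surplus plus reserved. A sketch of that check, reusing the get_meminfo helper sketched above (nr_hugepages=512 for this test):

    # Hugepage accounting identity from setup/hugepages.sh@110 (values per this run).
    nr_hugepages=512
    surp=$(get_meminfo HugePages_Surp)    # 0 here
    resv=$(get_meminfo HugePages_Rsvd)    # 0 here
    total=$(get_meminfo HugePages_Total)  # 512 here
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi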
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.292 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.293 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.293 [xtrace condensed: setup/common.sh@31-32 reads the remaining /proc/meminfo keys (SUnreclaim through HugePages_Rsvd) one at a time, tests each against HugePages_Surp, and skips it via continue]
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:29.294 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7641572 kB' 'MemAvailable: 10517080 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452736 kB' 'Inactive: 2750320 kB' 'Active(anon): 131836 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161640 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78736 kB' 'KernelStack: 6544 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
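The repeated IFS=': ' / read -r var val _ / continue entries above are one meminfo lookup unrolled once per key; the backslash-escaped right-hand side in the [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] tests is simply how bash xtrace prints a quoted comparison operand so it matches literally rather than as a glob. A minimal sketch of the idiom, reconstructed from this trace rather than copied from the upstream setup/common.sh (exact control flow and line numbers may differ):

#!/usr/bin/env bash
# Reconstruction of the lookup traced above; not the verbatim SPDK script.
shopt -s extglob                      # enables the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo mem
    # A per-node query reads that node's sysfs meminfo instead (see the
    # node=0 invocation later in this log).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # sysfs prefixes every line with "Node N "; strip it so both sources parse alike.
    mem=("${mem[@]#Node +([0-9]) }")
    # Key-by-key scan: this loop is what produces the long xtrace runs.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp            # system-wide; prints 0 in this run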
00:05:29.294 [xtrace condensed: setup/common.sh@31-32 tests every /proc/meminfo key from MemTotal through HugePages_Free against HugePages_Rsvd and skips each via continue]
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:29.296 nr_hugepages=512
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:29.296 resv_hugepages=0
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:29.296 surplus_hugepages=0
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:29.296 anon_hugepages=0
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:29.296 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7641572 kB' 'MemAvailable: 10517080 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452924 kB' 'Inactive: 2750320 kB' 'Active(anon): 132024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123128 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161628 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78724 kB' 'KernelStack: 6528 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
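The counters echoed at setup/hugepages.sh@102-105 and the arithmetic tests at @107-110 above boil down to a consistency check: the kernel's reported hugepage total must equal the pages this test requested plus any surplus and reserved pages. Roughly, reusing the get_meminfo sketch from earlier (variable names follow the trace; the exact script layout is an assumption):

nr_hugepages=512                        # pages this test configured
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# Kernel-reported total must account for requested + surplus + reserved pages.
total=$(get_meminfo HugePages_Total)    # 512 in this run
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2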
00:05:29.296 [xtrace condensed: setup/common.sh@31-32 tests keys MemTotal through Unaccepted against HugePages_Total and skips each via continue]
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7641572 kB' 'MemUsed: 4600396 kB' 'SwapCached: 0 kB' 'Active: 452928 kB' 'Inactive: 2750320 kB' 'Active(anon): 132028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3081688 kB' 'Mapped: 48664 kB' 'AnonPages: 123128 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82904 kB' 'Slab: 161628 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
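The get_nodes walk traced above enumerates /sys/devices/system/node/node* and the loop at setup/hugepages.sh@115-117 then repeats the same lookup against each node's own meminfo file. A sketch of that per-node pass, again reconstructed from the trace and reusing get_meminfo from earlier (the nodes_test initialization is an assumption; this log never shows where it is populated):

shopt -s extglob

resv=0                                 # from the HugePages_Rsvd lookup above
nodes_test[0]=512                      # assumed: pages expected on node 0

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512  # pages present on each node (one node on this VM)
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))                 # fail if no NUMA nodes were found
}

get_nodes
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))     # reserved pages count toward the node's share
    get_meminfo HugePages_Surp "$node" # reads /sys/devices/system/node/node$node/meminfo
done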
00:05:29.298 [xtrace condensed: setup/common.sh@31-32 tests node0 meminfo keys MemTotal through AnonPages against HugePages_Surp and skips each via continue; the scan continues]
00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:05:29.298 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.299 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:29.557 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.557 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.557 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.557 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.557 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:29.557 node0=512 expecting 512 00:05:29.557 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:29.557 00:05:29.558 real 0m0.528s 00:05:29.558 user 0m0.265s 00:05:29.558 sys 0m0.297s 00:05:29.558 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.558 01:48:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:29.558 ************************************ 00:05:29.558 END TEST per_node_1G_alloc 00:05:29.558 ************************************ 00:05:29.558 01:48:44 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:29.558 01:48:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.558 01:48:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.558 01:48:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:29.558 ************************************ 00:05:29.558 START TEST even_2G_alloc 00:05:29.558 ************************************ 00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:29.558 01:48:44 
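For readers following the trace: the per-node check above boils down to reading one field out of a node-local meminfo file and comparing it with the expected page count. A minimal standalone sketch of that lookup (the function name and the hard-coded expectation are illustrative, not part of setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob
    # Fetch one field from /sys/devices/system/node/node<N>/meminfo.
    # Lines there look like "Node 0 HugePages_Total: 512", so the
    # "Node 0 " prefix is stripped before splitting on ': '.
    node_meminfo() {
        local get=$1 node=$2 line var val _
        while read -r line; do
            line=${line##Node +([0-9]) }
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    # e.g. the trace above expects all 512 pages on node 0:
    [[ $(node_meminfo HugePages_Total 0) == 512 ]] && echo 'node0=512 expecting 512'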
00:05:29.558 01:48:44 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:29.558 01:48:44 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:29.558 01:48:44 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:29.558 01:48:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:29.558 ************************************
00:05:29.558 START TEST even_2G_alloc
00:05:29.558 ************************************
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:29.558 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:29.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:29.820 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:29.820 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:29.820 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:29.820 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:29.820 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:29.820 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:29.820 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:29.820 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:29.820 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:29.820 01:48:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
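The nr_hugepages=1024 value the test just configured is plain arithmetic: the requested pool divided by the default hugepage size. A worked sketch consistent with the numbers in this trace (variable names are illustrative, and the assumption that the 2097152 argument is in kB is corroborated by the 'Hugepagesize: 2048 kB' and 'Hugetlb: 2097152 kB' lines in the meminfo dumps below):

    # Illustrative recomputation of the sizing step traced above:
    size_kb=2097152            # requested 2 GiB pool, expressed in kB
    hugepage_kb=2048           # default x86-64 hugepage size, in kB
    echo $(( size_kb / hugepage_kb ))   # prints 1024, matching nr_hugepages=1024

With a single memory node (_no_nodes=1), the whole batch of 1024 pages lands in nodes_test[0], which is why the even-allocation test still expects one node to hold everything.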
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.820 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.821 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.821 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6591132 kB' 'MemAvailable: 9466640 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 453892 kB' 'Inactive: 2750320 kB' 'Active(anon): 132992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123948 kB' 'Mapped: 49020 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161584 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78680 kB' 'KernelStack: 6628 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:29.821 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:29.821 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:29.821 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:29.821 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.821 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
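The anon=0 just recorded is the system-wide AnonHugePages reading; because the node argument was empty, the lookup fell through to /proc/meminfo rather than a node-local file. A hypothetical one-liner equivalent (not from the harness) that answers the same query:

    # Same answer as the traced 'get_meminfo AnonHugePages' call, via awk:
    awk '$1 == "AnonHugePages:" { print $2; exit }' /proc/meminfo    # -> 0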
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6591132 kB' 'MemAvailable: 9466640 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452900 kB' 'Inactive: 2750320 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123132 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161604 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78700 kB' 'KernelStack: 6528 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.822 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
...
00:05:29.824 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:29.824 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.824 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.824 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
[xtrace scan elided: setup/common.sh@31-32 reads each /proc/meminfo field from MemTotal onward and skips it with 'continue' until HugePages_Rsvd matches]
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:29.826 nr_hugepages=1024
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:29.826 resv_hugepages=0
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:29.826 surplus_hugepages=0
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:29.826 anon_hugepages=0
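The trace above is the whole of get_meminfo's extraction pattern: snapshot the meminfo file, split every 'Key: value' line on IFS=': ', and echo the value of the first line whose key matches the requested one. A minimal standalone sketch of that pattern (function and variable names here are illustrative, not SPDK's exact helper):

    #!/usr/bin/env bash
    # Fetch one field from /proc/meminfo the way the trace above does:
    # split each "Key: value [kB]" line on ': ' and stop at the first match.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # numeric part only; most fields are in kB
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_field HugePages_Rsvd   # prints "0" on the runner traced above
    get_field Hugepagesize     # prints "2048" (kB)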
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:29.826 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:29.827 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6590880 kB' 'MemAvailable: 9466388 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452984 kB' 'Inactive: 2750320 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123216 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161596 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78692 kB' 'KernelStack: 6528 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
[xtrace scan elided: setup/common.sh@31-32 reads each /proc/meminfo field from MemTotal onward and skips it with 'continue' until HugePages_Total matches]
00:05:29.828 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:29.828 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:29.828 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:29.828 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
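get_nodes, traced at setup/hugepages.sh@27-33 just above, discovers the NUMA topology by globbing the per-node sysfs directories with the extglob pattern node+([0-9]); this runner has a single node, so no_nodes=1. A rough standalone equivalent, with variable names chosen to mirror the trace (assumes bash with extglob available):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes the way get_nodes does: glob sysfs and key an
    # array by the numeric node id stripped from the directory name.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024   # expected 2M pages per node
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes"            # prints "no_nodes=1" on this runner
    (( no_nodes > 0 ))                   # same sanity check as hugepages.sh@33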
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.088 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6590880 kB' 'MemUsed: 5651088 kB' 'SwapCached: 0 kB' 'Active: 452872 kB' 'Inactive: 2750320 kB' 'Active(anon): 131972 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 3081688 kB' 'Mapped: 48664 kB' 'AnonPages: 123104 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82904 kB' 'Slab: 161596 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
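Before the field scan resumes below, note the one difference from the earlier get_meminfo calls: because a node id (0) was passed, common.sh@23-24 swaps /proc/meminfo for the per-node sysfs file, whose lines carry a "Node 0 " prefix that the @29 expansion strips. A sketch of just that selection step, in hypothetical standalone form:

    #!/usr/bin/env bash
    # Per-node meminfo selection as traced at setup/common.sh@22-29.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # "Node 0 HugePages_Free: 1024" -> "HugePages_Free: 1024"
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"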
[xtrace scan elided: setup/common.sh@31-32 reads each node0 meminfo field from MemTotal onward and skips it with 'continue' until HugePages_Surp matches]
00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.089 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:30.090 node0=1024 expecting 1024 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:30.090 00:05:30.090 real 0m0.508s 00:05:30.090 user 0m0.266s 00:05:30.090 sys 0m0.273s 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.090 01:48:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:30.090 
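The get_meminfo trace above is setup/common.sh walking /proc/meminfo one record at a time: IFS=': ' splits each line into a field name and a value, and a literal [[ comparison against the requested field decides whether to echo the value or continue (xtrace prints the quoted right-hand side with every character escaped, which is why the pattern appears as \H\u\g\e\P\a\g\e\s\_\S\u\r\p). A minimal re-creation of that idiom, reconstructed from the trace rather than copied from setup/common.sh (the real helper also handles the per-node /sys/devices/system/node/nodeN/meminfo files it probes above):

    # sketch of the field scan seen in the xtrace above
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # xtrace renders this RHS as \H\u\g\e...
            echo "$val"                        # e.g. 0 for HugePages_Surp in this run
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Surp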
************************************ 00:05:30.090 END TEST even_2G_alloc 00:05:30.090 ************************************ 00:05:30.090 01:48:45 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:30.090 01:48:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.090 01:48:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.090 01:48:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:30.090 ************************************ 00:05:30.090 START TEST odd_alloc 00:05:30.090 ************************************ 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.090 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.352 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.352 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:30.352 01:48:45 
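The odd_alloc sizing that the trace above records: HUGEMEM=2049 (MB) becomes size=2098176 kB, and against the 2048 kB Hugepagesize visible in the meminfo dumps below that is half a page over 1024, so the helper lands on the deliberately odd count nr_hugepages=1025. A hypothetical re-derivation (the ceiling division is an assumption about how get_test_nr_hugepages rounds; the input and output values are the ones in the trace):

    size_kb=2098176   # HUGEMEM=2049 MB * 1024
    page_kb=2048      # Hugepagesize from /proc/meminfo
    echo $(( (size_kb + page_kb - 1) / page_kb ))   # 1025, matching nr_hugepages=1025
    echo $(( 1025 * page_kb ))                      # 2099200 kB, matching the Hugetlb lines below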
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6588972 kB' 'MemAvailable: 9464480 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 453124 kB' 'Inactive: 2750320 kB' 'Active(anon): 132224 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123340 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161708 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78804 kB' 'KernelStack: 6484 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.352 01:48:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the AnonHugePages lookup scans /proc/meminfo field by field (MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed), each non-match falling through to continue]
00:05:30.354 01:48:45
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6588972 kB' 'MemAvailable: 9464480 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 453088 kB' 'Inactive: 2750320 kB' 'Active(anon): 132188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123304 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161708 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78804 kB' 'KernelStack: 6468 kB' 'PageTables: 4176 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.354 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.354 01:48:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the HugePages_Surp lookup walks the remaining fields (Active(anon) through HugePages_Rsvd), each non-match falling through to continue]
00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:30.356 01:48:45
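anon and surp above, and resv in the get_meminfo call that follows, each trigger a fresh pass: every call re-reads the whole file with mapfile -t mem and then strips any per-node prefix via the extglob expansion mem=("${mem[@]#Node +([0-9]) }"). That expansion only bites when a node is selected and the source is /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix; with node empty, as in this run, it matches nothing and the /proc/meminfo lines pass through unchanged. A small standalone illustration (the sample lines are invented for the demo):

    shopt -s extglob                                     # +([0-9]) needs extglob
    mem=('Node 0 MemTotal: 12241968 kB' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")                     # drop the "Node N " prefix
    printf '%s\n' "${mem[@]}"                            # MemTotal: ... / HugePages_Surp: 0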
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6588972 kB' 'MemAvailable: 9464476 kB' 'Buffers: 2436 kB' 'Cached: 3079248 kB' 'SwapCached: 0 kB' 'Active: 453032 kB' 'Inactive: 2750316 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161692 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78788 kB' 'KernelStack: 6500 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.356 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:30.356 01:48:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:30.358 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:30.358 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:30.358 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:30.358 nr_hugepages=1025
01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:30.358 resv_hugepages=0
01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:30.358 surplus_hugepages=0
01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:30.358 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:30.358 anon_hugepages=0
01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:30.358 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.619 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6588972 kB' 'MemAvailable: 9464480 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452900 kB' 'Inactive: 2750320 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161692 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78788 kB' 'KernelStack: 6544 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:30.619 01:48:45 [xtrace: setup/common.sh@31-@32 read each field above in turn and continue past every key that is not HugePages_Total]
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
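Both get_meminfo invocations traced above follow the same pattern: pick /proc/meminfo, or the per-node sysfs file when a node index is given, strip any "Node <N> " prefix, split each line on ': ', and echo the value of the first key that matches. Below is a minimal stand-alone sketch of that lookup; the name get_meminfo_sketch and the sed-based prefix stripping are illustrative choices, not the verbatim setup/common.sh implementation.

get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Per-node counters live under sysfs when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <N> "; strip that, then
    # split each line on ': ' just as the traced read/compare loop does.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# usage: get_meminfo_sketch HugePages_Rsvd     -> 0   (system-wide)
#        get_meminfo_sketch HugePages_Surp 0   -> 0   (node 0 only)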
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.621 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6588972 kB' 'MemUsed: 5652996 kB' 'SwapCached: 0 kB' 'Active: 452852 kB' 'Inactive: 2750320 kB' 'Active(anon): 131952 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 3081688 kB' 'Mapped: 48664 kB' 'AnonPages: 123092 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82904 kB' 'Slab: 161692 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:05:30.621 01:48:45 [xtrace: setup/common.sh@31-@32 read each field above in turn and continue past every key that is not HugePages_Surp]
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:30.622 node0=1025 expecting 1025
01:48:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
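The check that just passed ("node0=1025 expecting 1025", then [[ 1025 == 1025 ]]) is plain accounting: the kernel-reported hugepage pool must equal the requested odd page count plus any surplus and reserved pages, both system-wide and per node. A hedged restatement on top of get_meminfo_sketch above; verify_odd_alloc_sketch is an illustrative name, and a single-node system is assumed, as in this run.

verify_odd_alloc_sketch() {
    local expected=$1 total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    # An odd page count only "sticks" when the whole pool is accounted
    # for: total pages == requested pages + surplus + reserved.
    (( total == expected + surp + resv )) || return 1
    # Per-node spot check, mirroring the "node0=1025 expecting 1025"
    # line that hugepages.sh@128 echoed above.
    echo "node0=$(get_meminfo_sketch HugePages_Total 0) expecting $expected"
}
# usage: verify_odd_alloc_sketch 1025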
00:05:30.622 real 0m0.516s 00:05:30.622 user 0m0.269s 00:05:30.622 sys 0m0.281s 00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.622 01:48:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:30.622 ************************************ 00:05:30.622 END TEST odd_alloc 00:05:30.622 ************************************ 00:05:30.622 01:48:45 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:30.623 01:48:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.623 01:48:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.623 01:48:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:30.623 ************************************ 00:05:30.623 START TEST custom_alloc 00:05:30.623 ************************************ 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.623 01:48:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.884 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.884 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:30.884 01:48:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7643200 kB' 'MemAvailable: 10518708 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 453644 kB' 'Inactive: 2750320 kB' 'Active(anon): 132744 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123964 kB' 'Mapped: 48784 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161680 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78776 kB' 'KernelStack: 6580 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.884 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
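The xtrace above is setup/common.sh's get_meminfo scanning /proc/meminfo one "Key: value" entry at a time: mapfile loads the file into an array, any "Node <N> " prefix is stripped (relevant when a per-node meminfo file is queried instead), and an IFS=': ' read splits each entry so the key can be compared against the requested field (AnonHugePages here). A minimal standalone sketch of that parsing approach, assuming the system-wide /proc/meminfo case only; get_meminfo_sketch is a hypothetical name for illustration, not the script's API:

# Sketch of the traced parsing loop; mirrors the get_meminfo logic from
# setup/common.sh for the no-NUMA-node case. Hypothetical helper name.
get_meminfo_sketch() {
    local get=$1 entry var val _
    local -a mem
    mapfile -t mem < /proc/meminfo            # one "Key:   value [kB]" entry per element
    for entry in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$entry"
        if [[ $var == "$get" ]]; then         # e.g. AnonHugePages, HugePages_Surp
            echo "$val"                       # kB for most keys; a page count for HugePages_*
            return 0
        fi
    done
    echo 0                                    # requested key absent
}

Against the meminfo dump above, get_meminfo_sketch HugePages_Total would print 512, matching the 512 huge pages the custom_alloc test reserved via HUGENODE.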
00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.885 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7643200 kB' 'MemAvailable: 10518708 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 453176 kB' 'Inactive: 2750320 kB' 'Active(anon): 132276 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123448 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161676 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78772 kB' 'KernelStack: 6532 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.886 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.887 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.888 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7643200 kB' 'MemAvailable: 10518708 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452880 kB' 'Inactive: 2750320 kB' 'Active(anon): 131980 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123088 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161708 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78804 kB' 'KernelStack: 6512 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.150 01:48:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[trace condensed: setup/common.sh@32 "continue" and setup/common.sh@31 "IFS=': '" / "read -r var val _" repeat for every remaining snapshot key, Active(file) through HugePages_Free, none matching HugePages_Rsvd]
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:31.152 nr_hugepages=512
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:31.152 resv_hugepages=0
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:31.152 surplus_hugepages=0
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:31.152 anon_hugepages=0
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
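The loop traced above is setup/common.sh's get_meminfo helper: it splits each captured meminfo record on ': ', skips every key that is not the requested one, and echoes the value of the first match (here HugePages_Rsvd -> 0). A minimal standalone sketch of the same parsing pattern, reading a live /proc/meminfo instead of the harness's printf-captured snapshot; the function name meminfo_get is illustrative, not part of setup/common.sh:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern from the trace: scan records,
    # skip non-matching keys, print the value of the requested one.
    # meminfo_get is a hypothetical name, not the harness's function.
    meminfo_get() {
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
            echo "$val"                        # value in kB, or a bare page count
            return 0
        done < "$file"
        return 1
    }

    meminfo_get HugePages_Rsvd   # prints 0 on this run's host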
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:31.152 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7643200 kB' 'MemAvailable: 10518708 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 453140 kB' 'Inactive: 2750320 kB' 'Active(anon): 132240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123348 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161708 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78804 kB' 'KernelStack: 6512 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
[trace condensed: the per-key skip loop runs over the snapshot, MemTotal onward, until HugePages_Total is reached]
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:31.154 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7642948 kB' 'MemUsed: 4599020 kB' 'SwapCached: 0 kB' 'Active: 453020 kB' 'Inactive: 2750320 kB' 'Active(anon): 132120 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 3081688 kB' 'Mapped: 48664 kB' 'AnonPages: 123224 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82904 kB' 'Slab: 161708 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: the per-key skip loop starts over the node0 snapshot, scanning toward HugePages_Surp]
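For the per-node query just traced, the helper switched mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripped the "Node <N> " prefix that per-node records carry, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion does. A hedged sketch of that selection logic under the same conventions (node_meminfo is an illustrative name; needs bash with extglob for the +([0-9]) pattern):

    #!/usr/bin/env bash
    shopt -s extglob   # enables the +([0-9]) pattern used below
    # Sketch of the node handling visible in the trace: prefer the
    # per-node meminfo file when it exists, strip its "Node <N> " prefix.
    node_meminfo() {
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # "Node 0 HugePages_Free:  512" -> "HugePages_Free:  512"
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    node_meminfo 0 | grep HugePages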
[trace condensed: skip iterations continue for the remaining node0 keys until HugePages_Surp is reached]
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:31.155 node0=512 expecting 512
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:31.155 real 0m0.526s
00:05:31.155 user 0m0.275s
00:05:31.155 sys 0m0.281s
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.155 01:48:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:31.155 ************************************
00:05:31.155 END TEST custom_alloc
00:05:31.156 ************************************
00:05:31.156 01:48:46 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:31.156 01:48:46 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.156 01:48:46 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.156 01:48:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:31.156 ************************************
00:05:31.156 START TEST no_shrink_alloc
00:05:31.156 ************************************
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
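The PASS just logged ("node0=512 expecting 512") is plain accounting: the kernel-reported HugePages_Total must equal the requested count once surplus and reserved pages are folded in, and each node's count must line up with what the test asked for. A simplified sketch of that arithmetic using this run's values (the real verify step in setup/hugepages.sh additionally collects the per-node sums through the sorted_t/sorted_s arrays):

    #!/usr/bin/env bash
    # This run's numbers: 512 pages requested, 0 surplus, 0 reserved,
    # one NUMA node reporting 512. Values are taken from the trace above.
    nr_hugepages=512 surp=0 resv=0 total=512 node0=512

    (( total == nr_hugepages + surp + resv )) || echo 'FAIL: global count'
    (( node0 == nr_hugepages ))               || echo 'FAIL: node0 count'
    echo "node0=$node0 expecting $nr_hugepages"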
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:31.156 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:31.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:31.416 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:31.416 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:31.416 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:31.416 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:31.416 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:31.416 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
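get_test_nr_hugepages turns a requested size in kB into a page count using the system default hugepage size, which is how the 2097152 kB (2 GiB) argument became nr_hugepages=1024 on this 2048 kB-hugepage VM. A minimal sketch of that conversion (standalone, not the harness's exact code):

    #!/usr/bin/env bash
    # 2097152 kB requested / 2048 kB per hugepage = 1024 pages (this run).
    size_kb=2097152
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    (( size_kb >= hp_kb )) || { echo 'size below one hugepage' >&2; exit 1; }
    echo "nr_hugepages=$(( size_kb / hp_kb ))"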
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6587696 kB' 'MemAvailable: 9463204 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 453340 kB' 'Inactive: 2750320 kB' 'Active(anon): 132440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123584 kB' 'Mapped: 48856 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161692 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78788 kB' 'KernelStack: 6500 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
[trace condensed: the per-key skip loop begins over the snapshot (MemTotal, MemFree, MemAvailable skipped), scanning toward AnonHugePages]
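The AnonHugePages lookup starting here only runs because the harness first checked that transparent hugepages are not disabled: the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test pattern-matches the bracketed active mode from sysfs. A sketch of that gate (reads the live sysfs path; the message strings are illustrative):

    #!/usr/bin/env bash
    # The kernel brackets the active THP mode, e.g. "always [madvise] never".
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        # THP available: AnonHugePages in /proc/meminfo may be non-zero
        awk '/^AnonHugePages:/ {print}' /proc/meminfo
    else
        echo 'THP disabled; AnonHugePages expected to stay 0'
    fi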
' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.417 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.418 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.418 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.418 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.418 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.681 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.681 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6587696 kB' 'MemAvailable: 9463204 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452940 kB' 'Inactive: 2750320 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123152 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161708 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78804 kB' 'KernelStack: 6528 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.682 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 
01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.683 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6587696 kB' 'MemAvailable: 9463204 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452940 kB' 'Inactive: 2750320 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123148 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161704 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78800 kB' 'KernelStack: 6528 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.684 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 
01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.685 01:48:46 
00:05:31.685 01:48:46 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: setup/common.sh@31-32 loop compared the remaining /proc/meminfo keys (Slab through HugePages_Free) against HugePages_Rsvd; every non-matching key hit 'continue']
00:05:31.686 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:31.686 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:31.686 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:31.686 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:31.686 nr_hugepages=1024
00:05:31.686 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:31.686 resv_hugepages=0
00:05:31.686 surplus_hugepages=0
00:05:31.686 anon_hugepages=0
00:05:31.686 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:31.686 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:31.686 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6587696 kB' 'MemAvailable: 9463204 kB' 'Buffers: 2436 kB' 'Cached: 3079252 kB' 'SwapCached: 0 kB' 'Active: 452944 kB' 'Inactive: 2750320 kB' 'Active(anon): 132044 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123156 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 82904 kB' 'Slab: 161692 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78788 kB' 'KernelStack: 6528 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:31.687 01:48:46 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: keys MemTotal through Unaccepted compared against HugePages_Total; every non-matching key hit 'continue']
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
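For readers decoding the condensed xtrace entries above: the setup/common.sh@17-@33 lines all belong to a small meminfo lookup helper that splits each 'Key: value' line and echoes the value of the requested key. Below is a minimal sketch of that loop, reconstructed only from the commands visible in this trace (the variable names match the trace, but treat it as an approximation, not the verbatim setup/common.sh source):

    shopt -s extglob                 # the +([0-9]) pattern below needs extglob

    get_meminfo() {                  # usage: get_meminfo <Key> [<numa-node>]
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # If a node was given and its meminfo file exists, read that instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> " - strip it
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "Key: value kB" -> var, val
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

With the values printed above, get_meminfo HugePages_Total echoes 1024, so the hugepages.sh@107/@110 checks (( 1024 == nr_hugepages + surp + resv )) reduce to 1024 == 1024 + 0 + 0 and pass.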
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6587696 kB' 'MemUsed: 5654272 kB' 'SwapCached: 0 kB' 'Active: 452992 kB' 'Inactive: 2750320 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750320 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 3081688 kB' 'Mapped: 48664 kB' 'AnonPages: 123200 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82904 kB' 'Slab: 161692 kB' 'SReclaimable: 82904 kB' 'SUnreclaim: 78788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
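Note the shape of the per-node dump just printed: /sys/devices/system/node/node0/meminfo carries a different key set than the system-wide file (MemUsed and FilePages appear; CommitLimit, the Vmalloc counters and the swap totals do not), and in the raw file every line is prefixed with 'Node 0 ', which is why the helper strips that prefix before parsing. Assuming the get_meminfo sketch above, the node-scoped lookup this trace is performing is simply:

    # Second argument selects the NUMA node (reads node0's meminfo file)
    surp=$(get_meminfo HugePages_Surp 0)
    echo "node0 surplus: ${surp:-0}"     # prints 0 in the run above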
00:05:31.689 01:48:46 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: node0 keys MemTotal through HugePages_Free compared against HugePages_Surp; every non-matching key hit 'continue']
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:31.690 node0=1024 expecting 1024
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:31.690 01:48:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:31.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:31.949 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:31.949 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:31.949 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:31.949 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:31.949 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:31.949 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:31.949 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:31.949 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:31.949 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:31.949 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:31.949 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
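Two entries worth unpacking before the trace descends into the AnonHugePages lookup: hugepages.sh@96 compares the string 'always [madvise] never' against the pattern *\[\n\e\v\e\r\]*, i.e. it checks whether transparent hugepages are globally disabled, and only because the mode is not '[never]' does @97 go on to query AnonHugePages at all. A short sketch of that gate, assuming the mode string comes from the usual sysfs file (an assumption; the trace does not show where the value was read):

    # Gate AnonHugePages accounting on the kernel's THP mode
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory
    else
        anon=0
    fi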
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:32.214 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6580688 kB' 'MemAvailable: 9456200 kB' 'Buffers: 2436 kB' 'Cached: 3079256 kB' 'SwapCached: 0 kB' 'Active: 448528 kB' 'Inactive: 2750324 kB' 'Active(anon): 127628 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118528 kB' 'Mapped: 48084 kB' 'Shmem: 10468 kB' 'KReclaimable: 82900 kB' 'Slab: 161404 kB' 'SReclaimable: 82900 kB' 'SUnreclaim: 78504 kB' 'KernelStack: 6488 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: keys MemTotal through VmallocUsed compared against AnonHugePages so far; the excerpt ends mid-scan]
continue 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.216 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6580716 kB' 'MemAvailable: 9456228 kB' 'Buffers: 2436 kB' 'Cached: 3079256 kB' 'SwapCached: 0 kB' 'Active: 448284 kB' 'Inactive: 2750324 kB' 'Active(anon): 127384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 
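The scan condensed above is the whole trick of get_meminfo in setup/common.sh: snapshot the meminfo file, replay it line by line, and walk it with IFS=': ' read, skipping every field until the requested key matches, then echo its value. A minimal self-contained sketch of that pattern, reconstructed from the trace (variable names follow the trace; this is a reconstruction, not the actual helper source, and the real helper also handles per-node files, sketched further below):

    get_meminfo() {
        local get=$1 var val _
        local mem_f=/proc/meminfo
        while IFS=': ' read -r var val _; do
            # e.g. "AnonHugePages:       0 kB" -> var=AnonHugePages, val=0
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
            continue   # every non-matching field is skipped, as traced
        done < "$mem_f"
        return 1       # key not present
    }

    anon=$(get_meminfo AnonHugePages)   # -> 0 on this runner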
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:32.215 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:32.216 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6580716 kB' 'MemAvailable: 9456228 kB' 'Buffers: 2436 kB' 'Cached: 3079256 kB' 'SwapCached: 0 kB' 'Active: 448284 kB' 'Inactive: 2750324 kB' 'Active(anon): 127384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118528 kB' 'Mapped: 47924 kB' 'Shmem: 10468 kB' 'KReclaimable: 82900 kB' 'Slab: 161404 kB' 'SReclaimable: 82900 kB' 'SUnreclaim: 78504 kB' 'KernelStack: 6432 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:32.216 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read -r var val _; continue  [repeated for every field until HugePages_Surp matches]
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
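The 'local node=' and '[[ -e /sys/devices/system/node/node/meminfo ]]' lines above explain the empty-looking test: no node argument was passed, the per-node path degenerates to 'node/meminfo', and the global /proc/meminfo is used instead. The 'mem=("${mem[@]#Node +([0-9]) }")' expansion only matters in the per-node case, where every line carries a 'Node N ' prefix. A hedged sketch of that branch (wrapper name and argument order are assumptions; the path test and the prefix strip are verbatim from the trace):

    get_node_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node files live at /sys/devices/system/node/node<N>/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix per-node files add
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_node_meminfo HugePages_Free 0   # free 2048 kB pages on NUMA node 0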
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:32.217 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:32.218 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6581144 kB' 'MemAvailable: 9456656 kB' 'Buffers: 2436 kB' 'Cached: 3079256 kB' 'SwapCached: 0 kB' 'Active: 448148 kB' 'Inactive: 2750324 kB' 'Active(anon): 127248 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118408 kB' 'Mapped: 47924 kB' 'Shmem: 10468 kB' 'KReclaimable: 82900 kB' 'Slab: 161404 kB' 'SReclaimable: 82900 kB' 'SUnreclaim: 78504 kB' 'KernelStack: 6432 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB'
00:05:32.218 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # read -r var val _; continue  [repeated for every field until HugePages_Rsvd matches]
00:05:32.219 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:32.219 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:32.220 nr_hugepages=1024
00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:32.220 resv_hugepages=0
00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:32.220 surplus_hugepages=0
00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:32.220 anon_hugepages=0
00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
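The two arithmetic guards at setup/hugepages.sh@107-109 are what this whole sequence feeds: the kernel's hugepage counters have to add up to the 1024 pages the test configured. A sketch of that bookkeeping with the values echoed above (get_meminfo as sketched earlier; reading the literal 1024 on the left of the trace as the requested page count is an assumption):

    nr_hugepages=1024                      # pages requested by the test
    surp=$(get_meminfo HugePages_Surp)     # 0 in the snapshots above
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    total=$(get_meminfo HugePages_Total)   # 1024

    # Counters must balance, mirroring (( 1024 == nr_hugepages + surp + resv ))
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

Size cross-check from the snapshots: 1024 pages x 2048 kB (Hugepagesize) = 2097152 kB, exactly the 'Hugetlb: 2097152 kB' field. Outside the harness the same lookup is just: awk '/^HugePages_Total:/ {print $2}' /proc/meminfo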
3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 6146048 kB' 'DirectMap1G: 8388608 kB' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.220 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:32.221 01:48:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.221 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6581144 kB' 'MemUsed: 5660824 kB' 'SwapCached: 0 kB' 'Active: 448172 kB' 'Inactive: 2750324 kB' 'Active(anon): 127272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2750324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 3081692 kB' 'Mapped: 47924 kB' 'AnonPages: 118416 kB' 'Shmem: 10468 kB' 'KernelStack: 6432 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82900 kB' 'Slab: 161404 kB' 'SReclaimable: 82900 kB' 'SUnreclaim: 78504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 
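The long scans above are the harness's get_meminfo helper walking every field of /proc/meminfo (or a per-node sysfs meminfo file) until the requested key matches, then echoing its value. A minimal standalone reconstruction from the xtrace, assuming only the field-name and optional NUMA-node arguments seen in the trace; the real setup/common.sh does this with mapfile and an extglob substitution rather than a while-read loop:

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; those lines carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val rest
        while IFS= read -r line; do
            line=${line#Node "$node" }          # drop the per-node prefix, if any
            IFS=': ' read -r var val rest <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                     # e.g. 1024 for HugePages_Total
                return 0
            fi
        done < "$mem_f"
        return 1
    }

With that helper the assertions in the trace read naturally: resv comes from get_meminfo HugePages_Rsvd, surp from get_meminfo HugePages_Surp, and the no_shrink_alloc check passes only while (( 1024 == nr_hugepages + surp + resv )) holds.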
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 
01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.222 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:32.223 node0=1024 expecting 1024 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:32.223 ************************************ 00:05:32.223 END TEST no_shrink_alloc 00:05:32.223 ************************************ 00:05:32.223 00:05:32.223 real 0m1.136s 00:05:32.223 user 0m0.578s 00:05:32.223 sys 0m0.567s 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.223 01:48:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:32.223 01:48:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:32.482 ************************************ 00:05:32.482 END TEST hugepages 00:05:32.482 ************************************ 00:05:32.482 00:05:32.482 real 0m4.616s 00:05:32.482 user 0m2.262s 00:05:32.482 sys 0m2.403s 00:05:32.482 01:48:47 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.482 01:48:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.482 01:48:47 setup.sh -- 
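The clear_hp teardown above zeroes every per-node hugepage pool through sysfs before the next test group starts. A rough sketch of what those @39-@41 xtrace lines amount to, inferred from the trace (run as root; hugepages-2048kB is the usual pool directory on x86_64):

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # release this node's pool
            done
        done
        export CLEAR_HUGE=yes                 # later setup.sh stages re-clear too
    }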
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:32.482 01:48:47 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.482 01:48:47 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.482 01:48:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:32.482 ************************************ 00:05:32.482 START TEST driver 00:05:32.482 ************************************ 00:05:32.482 01:48:47 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:32.482 * Looking for test storage... 00:05:32.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:32.482 01:48:47 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:32.482 01:48:47 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.482 01:48:47 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.050 01:48:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:33.050 01:48:48 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.050 01:48:48 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.050 01:48:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:33.050 ************************************ 00:05:33.050 START TEST guess_driver 00:05:33.050 ************************************ 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:33.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo
uio_pci_generic 00:05:33.050 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:33.051 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:33.051 Looking for driver=uio_pci_generic 00:05:33.051 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:33.051 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:33.051 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.051 01:48:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.051 01:48:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.618 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:33.618 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:33.618 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.618 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.618 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:33.618 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.878 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.878 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:33.878 01:48:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.878 01:48:49 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:33.878 01:48:49 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:33.878 01:48:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.878 01:48:49 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.446 00:05:34.446 real 0m1.382s 00:05:34.446 user 0m0.532s 00:05:34.446 sys 0m0.853s 00:05:34.446 01:48:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.446 ************************************ 00:05:34.446 END TEST guess_driver 00:05:34.446 ************************************ 00:05:34.446 01:48:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:34.446 ************************************ 00:05:34.446 END TEST driver 00:05:34.446 ************************************ 00:05:34.446 00:05:34.446 real 0m2.052s 00:05:34.446 user 0m0.759s 00:05:34.446 sys 0m1.349s 00:05:34.446 01:48:49 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.446 01:48:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:34.446 01:48:49 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:34.446 01:48:49 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.446 01:48:49 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.446 01:48:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:34.446 ************************************ 00:05:34.446 START TEST devices 00:05:34.446 
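The guess_driver test above walks a simple preference order: vfio-pci when the IOMMU is usable, otherwise uio_pci_generic when its kernel module resolves. A condensed sketch of that pick, reconstructed from the @21-@39 xtrace; the trace's pick_driver/vfio/is_driver helpers are collapsed into one function here:

    pick_driver() {
        local unsafe=""
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        # vfio wins if any IOMMU group exists, or unsafe no-IOMMU mode is on.
        if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
            echo vfio-pci
        # Otherwise fall back to uio_pci_generic, provided modprobe can resolve it.
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }

On this VM the IOMMU-group glob is empty and unsafe mode is unset, so the run falls through to uio_pci_generic, matching the 'Looking for driver=uio_pci_generic' marker in the log.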
************************************ 00:05:34.446 01:48:49 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:34.446 * Looking for test storage... 00:05:34.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:34.446 01:48:49 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:34.446 01:48:49 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:34.446 01:48:49 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:34.446 01:48:49 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:35.384 01:48:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
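get_zoned_devs above filters out zoned namespaces before the mount tests, since they cannot be formatted like ordinary block devices. A sketch of the @1669-@1673 logic, assuming only the sysfs queue/zoned attribute the trace reads:

    get_zoned_devs() {
        declare -gA zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            # queue/zoned reports "none" for a conventional namespace.
            if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
                zoned_devs[${nvme##*/}]=1
            fi
        done
    }

In this run all four namespaces report "none", so the zoned map stays empty and every device proceeds to the size and partition-table checks.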
00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:35.384 No valid GPT data, bailing 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:35.384 01:48:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:35.384 01:48:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:35.384 01:48:50 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:35.384 No valid GPT data, bailing 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:35.384 01:48:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:35.384 01:48:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:35.384 01:48:50 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
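Each candidate disk above must pass two gates: no recognizable partition table (spdk-gpt.py reports 'No valid GPT data, bailing' and blkid finds no PTTYPE), and a capacity of at least min_disk_size. A simplified stand-in using only blkid and sysfs; disk_qualifies is a hypothetical name, and the harness additionally consults its own spdk-gpt.py before falling back to blkid:

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

    disk_qualifies() {
        local dev=$1                            # e.g. nvme0n1
        local pt
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
        [[ -n $pt ]] && return 1                # a partitioned disk is in use
        # /sys/block/<dev>/size counts 512-byte sectors.
        local sectors
        sectors=$(<"/sys/block/$dev/size")
        (( sectors * 512 >= min_disk_size ))
    }

In this run every namespace is bare and comfortably above 3 GiB, so each lands in the blocks array together with its PCI address.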
00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:35.384 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:35.384 No valid GPT data, bailing 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:35.384 01:48:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:35.385 01:48:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:35.385 01:48:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:35.385 01:48:50 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:35.385 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:35.385 01:48:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:35.385 01:48:50 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:35.644 No valid GPT data, bailing 00:05:35.644 01:48:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:35.644 01:48:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:35.644 01:48:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:35.644 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:35.644 01:48:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:35.644 01:48:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:35.644 01:48:50 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:35.644 01:48:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:35.644 01:48:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:35.644 01:48:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:35.644 01:48:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:35.644 01:48:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:35.644 01:48:50 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:35.644 01:48:50 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.644 01:48:50 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.644 01:48:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:35.644 ************************************ 00:05:35.644 START TEST nvme_mount 00:05:35.644 ************************************ 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:35.644 01:48:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:36.580 Creating new GPT entries in memory. 00:05:36.580 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:36.580 other utilities. 00:05:36.580 01:48:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:36.580 01:48:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.580 01:48:51 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:36.580 01:48:51 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:36.580 01:48:51 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:37.997 Creating new GPT entries in memory. 00:05:37.997 The operation has completed successfully. 
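
The xtrace above captures the whole partitioning step: the 1 GiB request is converted to sectors, the old GPT/MBR structures are zapped, and sgdisk creates the partition under flock while a uevent helper waits for the kernel to surface nvme0n1p1. A minimal standalone sketch of that sequence, with the device name and size taken from the log (an illustration, not the actual setup/common.sh):

disk=/dev/nvme0n1
size=1073741824                          # 1 GiB requested for the partition
(( size /= 4096 ))                       # 262144, the script's sector count
part_start=2048                          # first usable sector after the GPT header
(( part_end = part_start + size - 1 ))   # 2048 + 262144 - 1 = 264191
sgdisk "$disk" --zap-all                 # destroy existing GPT/MBR data
flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}
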
00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 70536 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.997 01:48:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.997 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:37.997 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:37.997 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:37.997 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.997 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:37.997 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.997 01:48:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:37.997 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:38.277 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:38.277 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:38.535 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:38.535 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:38.535 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:38.535 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.536 01:48:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:38.795 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.795 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:38.795 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:38.795 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.795 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.795 01:48:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.795 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:38.795 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.054 01:48:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:39.313 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:39.313 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:39.313 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:39.313 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.313 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:39.313 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:39.572 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:39.572 00:05:39.572 real 0m4.008s 00:05:39.572 user 0m0.721s 00:05:39.572 sys 0m1.022s 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.572 ************************************ 00:05:39.572 END TEST nvme_mount 00:05:39.572 ************************************ 00:05:39.572 01:48:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # 
set +x 00:05:39.572 01:48:54 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:39.572 01:48:54 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.572 01:48:54 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.572 01:48:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:39.572 ************************************ 00:05:39.572 START TEST dm_mount 00:05:39.572 ************************************ 00:05:39.572 01:48:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:39.572 01:48:54 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:39.572 01:48:54 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:39.572 01:48:54 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:39.572 01:48:54 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:39.572 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:39.572 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:39.572 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:39.573 01:48:54 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:40.951 Creating new GPT entries in memory. 00:05:40.951 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:40.951 other utilities. 00:05:40.951 01:48:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.951 01:48:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.951 01:48:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.951 01:48:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.951 01:48:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:41.888 Creating new GPT entries in memory. 00:05:41.888 The operation has completed successfully. 
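
dm_mount repeats the same partitioning arithmetic for two partitions; partition 2 lands immediately after partition 1, as the --new=2:264192:526335 call below confirms. A sketch of the loop, plus a linear device-mapper table that would produce the holders seen further below (the log never prints the table devices.sh actually feeds to dmsetup, so the table here is an assumption):

disk=/dev/nvme0n1
sectors=262144                            # 1073741824 / 4096, as in the xtrace
part_start=2048
for part in 1 2; do
    (( part_end = part_start + sectors - 1 ))
    flock "$disk" sgdisk "$disk" --new=${part}:${part_start}:${part_end}
    (( part_start = part_end + 1 ))       # partition 2 starts at 264192
done
# Concatenate both 128 MiB partitions into one linear dm device
# (table format: start length linear <device> <offset>, in 512-byte sectors):
printf '%s\n' \
    "0 262144 linear ${disk}p1 0" \
    "262144 262144 linear ${disk}p2 0" | dmsetup create nvme_dm_test
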
00:05:41.889 01:48:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:41.889 01:48:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.889 01:48:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.889 01:48:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.889 01:48:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:42.826 The operation has completed successfully. 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 70972 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.826 01:48:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.086 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.086 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:43.086 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:43.086 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.086 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.086 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.086 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.086 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:43.345 01:48:58 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:43.345 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.346 01:48:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.604 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.604 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:43.604 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:43.604 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.604 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.604 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.605 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.605 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.863 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.863 01:48:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.863 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.863 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:43.863 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:43.863 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:43.863 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:43.863 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:43.864 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:43.864 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.864 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:43.864 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:43.864 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:43.864 01:48:59 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:05:43.864 00:05:43.864 real 0m4.238s 00:05:43.864 user 0m0.514s 00:05:43.864 sys 0m0.695s 00:05:43.864 01:48:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.864 ************************************ 00:05:43.864 END TEST dm_mount 00:05:43.864 ************************************ 00:05:43.864 01:48:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:43.864 01:48:59 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:43.864 01:48:59 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:43.864 01:48:59 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.864 01:48:59 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.864 01:48:59 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:43.864 01:48:59 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.864 01:48:59 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:44.123 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:44.123 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:44.123 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:44.123 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:44.123 01:48:59 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:44.123 01:48:59 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:44.123 01:48:59 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:44.123 01:48:59 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.123 01:48:59 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:44.123 01:48:59 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.123 01:48:59 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:44.123 00:05:44.123 real 0m9.748s 00:05:44.123 user 0m1.860s 00:05:44.123 sys 0m2.290s 00:05:44.123 01:48:59 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.123 01:48:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:44.123 ************************************ 00:05:44.123 END TEST devices 00:05:44.123 ************************************ 00:05:44.382 00:05:44.382 real 0m21.292s 00:05:44.382 user 0m7.014s 00:05:44.382 sys 0m8.693s 00:05:44.382 01:48:59 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.382 01:48:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:44.382 ************************************ 00:05:44.382 END TEST setup.sh 00:05:44.382 ************************************ 00:05:44.382 01:48:59 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:44.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.950 Hugepages 00:05:44.950 node hugesize free / total 00:05:44.950 node0 1048576kB 0 / 0 00:05:44.950 node0 2048kB 2048 / 2048 00:05:44.950 00:05:44.950 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.950 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:45.209 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:45.209 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:05:45.209 01:49:00 -- spdk/autotest.sh@130 -- # uname -s 00:05:45.209 01:49:00 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:45.209 01:49:00 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:45.209 01:49:00 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.036 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.036 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.036 01:49:01 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:47.032 01:49:02 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:47.032 01:49:02 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:47.032 01:49:02 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:47.032 01:49:02 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:47.032 01:49:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:47.032 01:49:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:47.032 01:49:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.032 01:49:02 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.032 01:49:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:47.291 01:49:02 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:47.291 01:49:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:47.291 01:49:02 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.550 Waiting for block devices as requested 00:05:47.550 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:47.550 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:47.809 01:49:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:47.809 01:49:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:47.809 01:49:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:47.809 01:49:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:47.809 01:49:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:47.809 01:49:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:47.809 01:49:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:47.809 01:49:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:47.809 01:49:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:47.809 01:49:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:47.809 01:49:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:47.809 01:49:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:47.809 01:49:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:47.809 01:49:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:47.809 01:49:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:47.809 01:49:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:47.809 01:49:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
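
The id-ctrl probing above is how the pre-cleanup loop decides whether a controller needs a namespace revert: OACS bit 3 (mask 0x8) advertises Namespace Management support, and an unvmcap of 0 means there is no unallocated capacity to reclaim, so the loop simply continues. A sketch of the same check, assuming nvme-cli is installed (the values shown match this run):

ctrl=/dev/nvme1
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)   # ' 0x12a' here
(( oacs_ns_manage = oacs & 0x8 ))                        # bit 3: Namespace Management
if (( oacs_ns_manage != 0 )); then
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "$ctrl: nothing to revert"
fi
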
00:05:47.809 01:49:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:47.809 01:49:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:47.809 01:49:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:47.809 01:49:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:47.809 01:49:02 -- common/autotest_common.sh@1557 -- # continue 00:05:47.809 01:49:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:47.809 01:49:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:47.809 01:49:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:47.809 01:49:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:47.809 01:49:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:47.809 01:49:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:47.809 01:49:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:47.809 01:49:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:47.809 01:49:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:47.809 01:49:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:47.809 01:49:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:47.809 01:49:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:47.809 01:49:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:47.809 01:49:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:47.809 01:49:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:47.809 01:49:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:47.809 01:49:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:47.809 01:49:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:47.809 01:49:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:47.809 01:49:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:47.809 01:49:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:47.809 01:49:02 -- common/autotest_common.sh@1557 -- # continue 00:05:47.809 01:49:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:47.809 01:49:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.809 01:49:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.809 01:49:03 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:47.809 01:49:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.809 01:49:03 -- common/autotest_common.sh@10 -- # set +x 00:05:47.809 01:49:03 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.636 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.636 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.636 01:49:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:48.636 01:49:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.636 01:49:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.636 01:49:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:48.636 01:49:03 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:48.636 01:49:03 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:48.636 01:49:03 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:05:48.636 01:49:03 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:48.636 01:49:03 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:48.636 01:49:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:48.636 01:49:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:48.636 01:49:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.636 01:49:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:48.636 01:49:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:48.895 01:49:03 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:48.895 01:49:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:48.895 01:49:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:48.895 01:49:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:48.895 01:49:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:48.895 01:49:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:48.895 01:49:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:48.895 01:49:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:48.895 01:49:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:48.895 01:49:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:48.895 01:49:03 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:48.895 01:49:03 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:48.895 01:49:03 -- common/autotest_common.sh@1593 -- # return 0 00:05:48.895 01:49:03 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:48.895 01:49:03 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:48.895 01:49:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:48.895 01:49:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:48.895 01:49:03 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:48.895 01:49:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.895 01:49:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.895 01:49:03 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:05:48.895 01:49:03 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:48.895 01:49:03 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:48.895 01:49:03 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:48.895 01:49:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.895 01:49:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.895 01:49:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.895 ************************************ 00:05:48.895 START TEST env 00:05:48.895 ************************************ 00:05:48.895 01:49:03 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:48.895 * Looking for test storage... 
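
opal_revert_cleanup filters the NVMe BDFs by PCI device id, reading each device's id from sysfs and keeping only the id it was asked for (0x0a54); the QEMU controllers here report 0x0010, so the resulting list is empty and the function returns without doing anything. A minimal sketch of that filter, with the BDFs taken from this run:

for bdf in 0000:00:10.0 0000:00:11.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # '0x0010' on this VM
    [[ $device == 0x0a54 ]] && echo "$bdf"             # keep only matching ids
done
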
00:05:48.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:48.895 01:49:04 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:48.895 01:49:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.895 01:49:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.895 01:49:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.895 ************************************ 00:05:48.895 START TEST env_memory 00:05:48.895 ************************************ 00:05:48.895 01:49:04 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:48.895 00:05:48.895 00:05:48.895 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.895 http://cunit.sourceforge.net/ 00:05:48.895 00:05:48.895 00:05:48.895 Suite: memory 00:05:48.895 Test: alloc and free memory map ...[2024-07-25 01:49:04.120631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:48.895 passed 00:05:48.895 Test: mem map translation ...[2024-07-25 01:49:04.151234] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:48.895 [2024-07-25 01:49:04.151272] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:48.895 [2024-07-25 01:49:04.151328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:48.895 [2024-07-25 01:49:04.151339] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:49.155 passed 00:05:49.155 Test: mem map registration ...[2024-07-25 01:49:04.215075] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:49.155 [2024-07-25 01:49:04.215107] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:49.155 passed 00:05:49.155 Test: mem map adjacent registrations ...passed 00:05:49.155 00:05:49.155 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.155 suites 1 1 n/a 0 0 00:05:49.155 tests 4 4 4 0 0 00:05:49.155 asserts 152 152 152 0 n/a 00:05:49.155 00:05:49.155 Elapsed time = 0.213 seconds 00:05:49.155 00:05:49.155 real 0m0.226s 00:05:49.155 user 0m0.212s 00:05:49.155 sys 0m0.011s 00:05:49.155 01:49:04 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.155 01:49:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:49.155 ************************************ 00:05:49.155 END TEST env_memory 00:05:49.155 ************************************ 00:05:49.155 01:49:04 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:49.155 01:49:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.155 01:49:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.155 01:49:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.155 ************************************ 00:05:49.155 START TEST env_vtophys 00:05:49.155 ************************************ 00:05:49.155 01:49:04 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:49.155 EAL: lib.eal log level changed from notice to debug 00:05:49.155 EAL: Detected lcore 0 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 1 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 2 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 3 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 4 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 5 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 6 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 7 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 8 as core 0 on socket 0 00:05:49.155 EAL: Detected lcore 9 as core 0 on socket 0 00:05:49.155 EAL: Maximum logical cores by configuration: 128 00:05:49.155 EAL: Detected CPU lcores: 10 00:05:49.155 EAL: Detected NUMA nodes: 1 00:05:49.155 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:05:49.155 EAL: Detected shared linkage of DPDK 00:05:49.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:05:49.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:05:49.155 EAL: Registered [vdev] bus. 00:05:49.155 EAL: bus.vdev log level changed from disabled to notice 00:05:49.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:05:49.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:05:49.155 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:49.155 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:49.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:05:49.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:05:49.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:05:49.155 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:05:49.155 EAL: No shared files mode enabled, IPC will be disabled 00:05:49.155 EAL: No shared files mode enabled, IPC is disabled 00:05:49.155 EAL: Selected IOVA mode 'PA' 00:05:49.155 EAL: Probing VFIO support... 00:05:49.155 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:49.155 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:49.155 EAL: Ask a virtual area of 0x2e000 bytes 00:05:49.155 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:49.155 EAL: Setting up physically contiguous memory... 
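
Each "Ask a virtual area of 0x400000000 bytes" below is one memseg list's address-space window: 8192 segments of 2 MiB hugepages is exactly 16 GiB, and EAL reserves four such windows (64 GiB of virtual address space) without backing any of it with physical memory yet. The arithmetic:

printf '0x%x\n' $(( 8192 * 2097152 ))                   # 0x400000000 bytes = 16 GiB per list
printf '%d GiB\n' $(( 4 * 8192 * 2097152 / 1024**3 ))   # 64 GiB reserved in total
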
00:05:49.155 EAL: Setting maximum number of open files to 524288 00:05:49.155 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:49.155 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:49.155 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.155 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:49.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.155 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.155 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:49.155 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:49.155 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.155 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:49.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.155 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.155 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:49.155 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:49.155 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.155 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:49.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.155 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.155 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:49.155 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:49.155 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.155 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:49.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.155 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.155 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:49.155 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:49.155 EAL: Hugepages will be freed exactly as allocated. 00:05:49.155 EAL: No shared files mode enabled, IPC is disabled 00:05:49.155 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: TSC frequency is ~2200000 KHz 00:05:49.415 EAL: Main lcore 0 is ready (tid=7fa3c1106a00;cpuset=[0]) 00:05:49.415 EAL: Trying to obtain current memory policy. 00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 0 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 2MB 00:05:49.415 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Mem event callback 'spdk:(nil)' registered 00:05:49.415 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:49.415 00:05:49.415 00:05:49.415 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.415 http://cunit.sourceforge.net/ 00:05:49.415 00:05:49.415 00:05:49.415 Suite: components_suite 00:05:49.415 Test: vtophys_malloc_test ...passed 00:05:49.415 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 4 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 4MB 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was shrunk by 4MB 00:05:49.415 EAL: Trying to obtain current memory policy. 00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 4 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 6MB 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was shrunk by 6MB 00:05:49.415 EAL: Trying to obtain current memory policy. 00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 4 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 10MB 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was shrunk by 10MB 00:05:49.415 EAL: Trying to obtain current memory policy. 00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 4 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 18MB 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was shrunk by 18MB 00:05:49.415 EAL: Trying to obtain current memory policy. 00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 4 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 34MB 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was shrunk by 34MB 00:05:49.415 EAL: Trying to obtain current memory policy. 
00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 4 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 66MB 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was shrunk by 66MB 00:05:49.415 EAL: Trying to obtain current memory policy. 00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 4 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 130MB 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was shrunk by 130MB 00:05:49.415 EAL: Trying to obtain current memory policy. 00:05:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.415 EAL: Restoring previous memory policy: 4 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.415 EAL: request: mp_malloc_sync 00:05:49.415 EAL: No shared files mode enabled, IPC is disabled 00:05:49.415 EAL: Heap on socket 0 was expanded by 258MB 00:05:49.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.673 EAL: request: mp_malloc_sync 00:05:49.673 EAL: No shared files mode enabled, IPC is disabled 00:05:49.673 EAL: Heap on socket 0 was shrunk by 258MB 00:05:49.673 EAL: Trying to obtain current memory policy. 00:05:49.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.673 EAL: Restoring previous memory policy: 4 00:05:49.673 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.673 EAL: request: mp_malloc_sync 00:05:49.673 EAL: No shared files mode enabled, IPC is disabled 00:05:49.673 EAL: Heap on socket 0 was expanded by 514MB 00:05:49.673 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.673 EAL: request: mp_malloc_sync 00:05:49.673 EAL: No shared files mode enabled, IPC is disabled 00:05:49.673 EAL: Heap on socket 0 was shrunk by 514MB 00:05:49.673 EAL: Trying to obtain current memory policy. 
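Each "expanded by" line above should be matched by a "shrunk by" line once the buffer is freed, leaving the heap back at baseline. A small checker for a captured copy of this console text; the `vtophys.log` filename is hypothetical:

```python
import re

def check_heap_balance(log: str) -> None:
    # Pair up the EAL heap-sizing messages; the multisets must match.
    expanded = re.findall(r"Heap on socket 0 was expanded by (\d+)MB", log)
    shrunk = re.findall(r"Heap on socket 0 was shrunk by (\d+)MB", log)
    assert sorted(expanded) == sorted(shrunk), (expanded, shrunk)
    print(f"{len(expanded)} expand/shrink pairs balanced")

check_heap_balance(open("vtophys.log").read())  # hypothetical capture file
```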
00:05:49.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.931 EAL: Restoring previous memory policy: 4 00:05:49.931 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.931 EAL: request: mp_malloc_sync 00:05:49.931 EAL: No shared files mode enabled, IPC is disabled 00:05:49.931 EAL: Heap on socket 0 was expanded by 1026MB 00:05:49.931 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.189 passed 00:05:50.189 00:05:50.189 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.189 suites 1 1 n/a 0 0 00:05:50.189 tests 2 2 2 0 0 00:05:50.189 asserts 5197 5197 5197 0 n/a 00:05:50.189 00:05:50.189 Elapsed time = 0.728 seconds 00:05:50.189 EAL: request: mp_malloc_sync 00:05:50.189 EAL: No shared files mode enabled, IPC is disabled 00:05:50.189 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:50.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.189 EAL: request: mp_malloc_sync 00:05:50.189 EAL: No shared files mode enabled, IPC is disabled 00:05:50.189 EAL: Heap on socket 0 was shrunk by 2MB 00:05:50.189 EAL: No shared files mode enabled, IPC is disabled 00:05:50.189 EAL: No shared files mode enabled, IPC is disabled 00:05:50.189 EAL: No shared files mode enabled, IPC is disabled 00:05:50.189 00:05:50.189 real 0m0.926s 00:05:50.189 user 0m0.471s 00:05:50.189 sys 0m0.322s 00:05:50.190 01:49:05 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.190 01:49:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:50.190 ************************************ 00:05:50.190 END TEST env_vtophys 00:05:50.190 ************************************ 00:05:50.190 01:49:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:50.190 01:49:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.190 01:49:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.190 01:49:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.190 ************************************ 00:05:50.190 START TEST env_pci 00:05:50.190 ************************************ 00:05:50.190 01:49:05 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:50.190 00:05:50.190 00:05:50.190 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.190 http://cunit.sourceforge.net/ 00:05:50.190 00:05:50.190 00:05:50.190 Suite: pci 00:05:50.190 Test: pci_hook ...[2024-07-25 01:49:05.344934] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72160 has claimed it 00:05:50.190 passed 00:05:50.190 00:05:50.190 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.190 suites 1 1 n/a 0 0 00:05:50.190 tests 1 1 1 0 0 00:05:50.190 asserts 25 25 25 0 n/a 00:05:50.190 00:05:50.190 Elapsed time = 0.002 seconds 00:05:50.190 EAL: Cannot find device (10000:00:01.0) 00:05:50.190 EAL: Failed to attach device on primary process 00:05:50.190 00:05:50.190 real 0m0.019s 00:05:50.190 user 0m0.012s 00:05:50.190 sys 0m0.007s 00:05:50.190 01:49:05 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.190 01:49:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:50.190 ************************************ 00:05:50.190 END TEST env_pci 00:05:50.190 ************************************ 00:05:50.190 01:49:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:50.190 01:49:05 env -- env/env.sh@15 -- # uname 00:05:50.190 01:49:05 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:50.190 01:49:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:50.190 01:49:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:50.190 01:49:05 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:50.190 01:49:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.190 01:49:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.190 ************************************ 00:05:50.190 START TEST env_dpdk_post_init 00:05:50.190 ************************************ 00:05:50.190 01:49:05 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:50.190 EAL: Detected CPU lcores: 10 00:05:50.190 EAL: Detected NUMA nodes: 1 00:05:50.190 EAL: Detected shared linkage of DPDK 00:05:50.190 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:50.190 EAL: Selected IOVA mode 'PA' 00:05:50.448 Starting DPDK initialization... 00:05:50.448 Starting SPDK post initialization... 00:05:50.448 SPDK NVMe probe 00:05:50.448 Attaching to 0000:00:10.0 00:05:50.448 Attaching to 0000:00:11.0 00:05:50.448 Attached to 0000:00:10.0 00:05:50.448 Attached to 0000:00:11.0 00:05:50.448 Cleaning up... 00:05:50.448 00:05:50.448 real 0m0.181s 00:05:50.448 user 0m0.046s 00:05:50.448 sys 0m0.036s 00:05:50.448 01:49:05 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.448 01:49:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.448 ************************************ 00:05:50.448 END TEST env_dpdk_post_init 00:05:50.448 ************************************ 00:05:50.448 01:49:05 env -- env/env.sh@26 -- # uname 00:05:50.448 01:49:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:50.448 01:49:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:50.448 01:49:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.448 01:49:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.448 01:49:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.448 ************************************ 00:05:50.448 START TEST env_mem_callbacks 00:05:50.448 ************************************ 00:05:50.448 01:49:05 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:50.448 EAL: Detected CPU lcores: 10 00:05:50.448 EAL: Detected NUMA nodes: 1 00:05:50.448 EAL: Detected shared linkage of DPDK 00:05:50.448 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:50.448 EAL: Selected IOVA mode 'PA' 00:05:50.706 00:05:50.706 00:05:50.706 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.707 http://cunit.sourceforge.net/ 00:05:50.707 00:05:50.707 00:05:50.707 Suite: memory 00:05:50.707 Test: test ... 
00:05:50.707 register 0x200000200000 2097152 00:05:50.707 malloc 3145728 00:05:50.707 register 0x200000400000 4194304 00:05:50.707 buf 0x200000500000 len 3145728 PASSED 00:05:50.707 malloc 64 00:05:50.707 buf 0x2000004fff40 len 64 PASSED 00:05:50.707 malloc 4194304 00:05:50.707 register 0x200000800000 6291456 00:05:50.707 buf 0x200000a00000 len 4194304 PASSED 00:05:50.707 free 0x200000500000 3145728 00:05:50.707 free 0x2000004fff40 64 00:05:50.707 unregister 0x200000400000 4194304 PASSED 00:05:50.707 free 0x200000a00000 4194304 00:05:50.707 unregister 0x200000800000 6291456 PASSED 00:05:50.707 malloc 8388608 00:05:50.707 register 0x200000400000 10485760 00:05:50.707 buf 0x200000600000 len 8388608 PASSED 00:05:50.707 free 0x200000600000 8388608 00:05:50.707 unregister 0x200000400000 10485760 PASSED 00:05:50.707 passed 00:05:50.707 00:05:50.707 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.707 suites 1 1 n/a 0 0 00:05:50.707 tests 1 1 1 0 0 00:05:50.707 asserts 15 15 15 0 n/a 00:05:50.707 00:05:50.707 Elapsed time = 0.008 seconds 00:05:50.707 00:05:50.707 real 0m0.142s 00:05:50.707 user 0m0.018s 00:05:50.707 sys 0m0.022s 00:05:50.707 01:49:05 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.707 01:49:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:50.707 ************************************ 00:05:50.707 END TEST env_mem_callbacks 00:05:50.707 ************************************ 00:05:50.707 00:05:50.707 real 0m1.842s 00:05:50.707 user 0m0.866s 00:05:50.707 sys 0m0.619s 00:05:50.707 01:49:05 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.707 01:49:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.707 ************************************ 00:05:50.707 END TEST env 00:05:50.707 ************************************ 00:05:50.707 01:49:05 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:50.707 01:49:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.707 01:49:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.707 01:49:05 -- common/autotest_common.sh@10 -- # set +x 00:05:50.707 ************************************ 00:05:50.707 START TEST rpc 00:05:50.707 ************************************ 00:05:50.707 01:49:05 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:50.707 * Looking for test storage... 00:05:50.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.707 01:49:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=72269 00:05:50.707 01:49:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.707 01:49:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:50.707 01:49:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 72269 00:05:50.707 01:49:05 rpc -- common/autotest_common.sh@831 -- # '[' -z 72269 ']' 00:05:50.707 01:49:05 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.707 01:49:05 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.707 01:49:05 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
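The harness's waitforlisten helper (the "Waiting for process to start up..." message above) polls until spdk_tgt's JSON-RPC UNIX socket accepts a connection; the rpc_integrity suite below then drives that socket through rpc.py. A rough Python equivalent of both steps, assuming SPDK's bundled client (`spdk.rpc.client.JSONRPCClient`) is on PYTHONPATH; the parameters mirror the Malloc0 dump below (16384 blocks of 512 bytes):

```python
import socket, time

def wait_for_listen(path="/var/tmp/spdk.sock", timeout=30.0):
    # Poll until something accepts connections on the RPC socket,
    # roughly what the harness's waitforlisten helper does.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return
        except OSError:
            time.sleep(0.1)
        finally:
            s.close()
    raise TimeoutError(f"no listener on {path}")

wait_for_listen()

from spdk.rpc.client import JSONRPCClient  # assumes SPDK's python/ dir on PYTHONPATH

client = JSONRPCClient("/var/tmp/spdk.sock")
name = client.call("bdev_malloc_create", {"num_blocks": 16384, "block_size": 512})
bdevs = client.call("bdev_get_bdevs")
assert len(bdevs) == 1 and bdevs[0]["name"] == name
client.call("bdev_malloc_delete", {"name": name})
```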
00:05:50.707 01:49:05 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.707 01:49:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.965 [2024-07-25 01:49:06.032920] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:50.965 [2024-07-25 01:49:06.033009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72269 ] 00:05:50.965 [2024-07-25 01:49:06.155302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.965 [2024-07-25 01:49:06.173156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.965 [2024-07-25 01:49:06.206533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:50.965 [2024-07-25 01:49:06.206596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 72269' to capture a snapshot of events at runtime. 00:05:50.965 [2024-07-25 01:49:06.206620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.965 [2024-07-25 01:49:06.206627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.965 [2024-07-25 01:49:06.206633] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid72269 for offline analysis/debug. 00:05:50.965 [2024-07-25 01:49:06.206660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.965 [2024-07-25 01:49:06.233165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.224 01:49:06 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.224 01:49:06 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.224 01:49:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.224 01:49:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.224 01:49:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:51.224 01:49:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:51.224 01:49:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.224 01:49:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.224 01:49:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.224 ************************************ 00:05:51.224 START TEST rpc_integrity 00:05:51.224 ************************************ 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.224 01:49:06 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.224 { 00:05:51.224 "name": "Malloc0", 00:05:51.224 "aliases": [ 00:05:51.224 "013f621e-4a51-4609-be5b-7db9d1569b26" 00:05:51.224 ], 00:05:51.224 "product_name": "Malloc disk", 00:05:51.224 "block_size": 512, 00:05:51.224 "num_blocks": 16384, 00:05:51.224 "uuid": "013f621e-4a51-4609-be5b-7db9d1569b26", 00:05:51.224 "assigned_rate_limits": { 00:05:51.224 "rw_ios_per_sec": 0, 00:05:51.224 "rw_mbytes_per_sec": 0, 00:05:51.224 "r_mbytes_per_sec": 0, 00:05:51.224 "w_mbytes_per_sec": 0 00:05:51.224 }, 00:05:51.224 "claimed": false, 00:05:51.224 "zoned": false, 00:05:51.224 "supported_io_types": { 00:05:51.224 "read": true, 00:05:51.224 "write": true, 00:05:51.224 "unmap": true, 00:05:51.224 "flush": true, 00:05:51.224 "reset": true, 00:05:51.224 "nvme_admin": false, 00:05:51.224 "nvme_io": false, 00:05:51.224 "nvme_io_md": false, 00:05:51.224 "write_zeroes": true, 00:05:51.224 "zcopy": true, 00:05:51.224 "get_zone_info": false, 00:05:51.224 "zone_management": false, 00:05:51.224 "zone_append": false, 00:05:51.224 "compare": false, 00:05:51.224 "compare_and_write": false, 00:05:51.224 "abort": true, 00:05:51.224 "seek_hole": false, 00:05:51.224 "seek_data": false, 00:05:51.224 "copy": true, 00:05:51.224 "nvme_iov_md": false 00:05:51.224 }, 00:05:51.224 "memory_domains": [ 00:05:51.224 { 00:05:51.224 "dma_device_id": "system", 00:05:51.224 "dma_device_type": 1 00:05:51.224 }, 00:05:51.224 { 00:05:51.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.224 "dma_device_type": 2 00:05:51.224 } 00:05:51.224 ], 00:05:51.224 "driver_specific": {} 00:05:51.224 } 00:05:51.224 ]' 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.224 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.224 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.224 [2024-07-25 01:49:06.520046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:51.224 [2024-07-25 01:49:06.520094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.224 [2024-07-25 01:49:06.520111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ea4a80 00:05:51.224 [2024-07-25 01:49:06.520120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.224 [2024-07-25 01:49:06.521662] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.224 [2024-07-25 01:49:06.521692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.482 Passthru0 00:05:51.482 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.482 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.482 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.482 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.482 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.482 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.482 { 00:05:51.482 "name": "Malloc0", 00:05:51.482 "aliases": [ 00:05:51.482 "013f621e-4a51-4609-be5b-7db9d1569b26" 00:05:51.482 ], 00:05:51.482 "product_name": "Malloc disk", 00:05:51.482 "block_size": 512, 00:05:51.482 "num_blocks": 16384, 00:05:51.482 "uuid": "013f621e-4a51-4609-be5b-7db9d1569b26", 00:05:51.482 "assigned_rate_limits": { 00:05:51.482 "rw_ios_per_sec": 0, 00:05:51.482 "rw_mbytes_per_sec": 0, 00:05:51.482 "r_mbytes_per_sec": 0, 00:05:51.482 "w_mbytes_per_sec": 0 00:05:51.482 }, 00:05:51.482 "claimed": true, 00:05:51.482 "claim_type": "exclusive_write", 00:05:51.482 "zoned": false, 00:05:51.482 "supported_io_types": { 00:05:51.482 "read": true, 00:05:51.482 "write": true, 00:05:51.483 "unmap": true, 00:05:51.483 "flush": true, 00:05:51.483 "reset": true, 00:05:51.483 "nvme_admin": false, 00:05:51.483 "nvme_io": false, 00:05:51.483 "nvme_io_md": false, 00:05:51.483 "write_zeroes": true, 00:05:51.483 "zcopy": true, 00:05:51.483 "get_zone_info": false, 00:05:51.483 "zone_management": false, 00:05:51.483 "zone_append": false, 00:05:51.483 "compare": false, 00:05:51.483 "compare_and_write": false, 00:05:51.483 "abort": true, 00:05:51.483 "seek_hole": false, 00:05:51.483 "seek_data": false, 00:05:51.483 "copy": true, 00:05:51.483 "nvme_iov_md": false 00:05:51.483 }, 00:05:51.483 "memory_domains": [ 00:05:51.483 { 00:05:51.483 "dma_device_id": "system", 00:05:51.483 "dma_device_type": 1 00:05:51.483 }, 00:05:51.483 { 00:05:51.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.483 "dma_device_type": 2 00:05:51.483 } 00:05:51.483 ], 00:05:51.483 "driver_specific": {} 00:05:51.483 }, 00:05:51.483 { 00:05:51.483 "name": "Passthru0", 00:05:51.483 "aliases": [ 00:05:51.483 "f00208b8-e809-5ad8-9530-f07773248770" 00:05:51.483 ], 00:05:51.483 "product_name": "passthru", 00:05:51.483 "block_size": 512, 00:05:51.483 "num_blocks": 16384, 00:05:51.483 "uuid": "f00208b8-e809-5ad8-9530-f07773248770", 00:05:51.483 "assigned_rate_limits": { 00:05:51.483 "rw_ios_per_sec": 0, 00:05:51.483 "rw_mbytes_per_sec": 0, 00:05:51.483 "r_mbytes_per_sec": 0, 00:05:51.483 "w_mbytes_per_sec": 0 00:05:51.483 }, 00:05:51.483 "claimed": false, 00:05:51.483 "zoned": false, 00:05:51.483 "supported_io_types": { 00:05:51.483 "read": true, 00:05:51.483 "write": true, 00:05:51.483 "unmap": true, 00:05:51.483 "flush": true, 00:05:51.483 "reset": true, 00:05:51.483 "nvme_admin": false, 00:05:51.483 "nvme_io": false, 00:05:51.483 "nvme_io_md": false, 00:05:51.483 "write_zeroes": true, 00:05:51.483 "zcopy": true, 00:05:51.483 "get_zone_info": false, 00:05:51.483 "zone_management": false, 00:05:51.483 "zone_append": false, 00:05:51.483 "compare": false, 00:05:51.483 "compare_and_write": false, 00:05:51.483 "abort": true, 00:05:51.483 "seek_hole": false, 00:05:51.483 "seek_data": 
false, 00:05:51.483 "copy": true, 00:05:51.483 "nvme_iov_md": false 00:05:51.483 }, 00:05:51.483 "memory_domains": [ 00:05:51.483 { 00:05:51.483 "dma_device_id": "system", 00:05:51.483 "dma_device_type": 1 00:05:51.483 }, 00:05:51.483 { 00:05:51.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.483 "dma_device_type": 2 00:05:51.483 } 00:05:51.483 ], 00:05:51.483 "driver_specific": { 00:05:51.483 "passthru": { 00:05:51.483 "name": "Passthru0", 00:05:51.483 "base_bdev_name": "Malloc0" 00:05:51.483 } 00:05:51.483 } 00:05:51.483 } 00:05:51.483 ]' 00:05:51.483 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.483 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.483 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.483 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.483 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.483 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.483 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.483 01:49:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.483 00:05:51.483 real 0m0.336s 00:05:51.483 user 0m0.219s 00:05:51.483 sys 0m0.040s 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.483 01:49:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.483 ************************************ 00:05:51.483 END TEST rpc_integrity 00:05:51.483 ************************************ 00:05:51.483 01:49:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:51.483 01:49:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.483 01:49:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.483 01:49:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.483 ************************************ 00:05:51.483 START TEST rpc_plugins 00:05:51.483 ************************************ 00:05:51.483 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:51.483 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:51.483 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.483 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.483 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.483 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:51.483 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:51.483 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:51.483 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.483 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.741 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:51.741 { 00:05:51.741 "name": "Malloc1", 00:05:51.741 "aliases": [ 00:05:51.741 "bffb13d5-9f21-402a-9fe8-518e3a12201e" 00:05:51.741 ], 00:05:51.741 "product_name": "Malloc disk", 00:05:51.741 "block_size": 4096, 00:05:51.741 "num_blocks": 256, 00:05:51.741 "uuid": "bffb13d5-9f21-402a-9fe8-518e3a12201e", 00:05:51.741 "assigned_rate_limits": { 00:05:51.741 "rw_ios_per_sec": 0, 00:05:51.741 "rw_mbytes_per_sec": 0, 00:05:51.741 "r_mbytes_per_sec": 0, 00:05:51.741 "w_mbytes_per_sec": 0 00:05:51.741 }, 00:05:51.741 "claimed": false, 00:05:51.741 "zoned": false, 00:05:51.741 "supported_io_types": { 00:05:51.741 "read": true, 00:05:51.741 "write": true, 00:05:51.741 "unmap": true, 00:05:51.741 "flush": true, 00:05:51.741 "reset": true, 00:05:51.741 "nvme_admin": false, 00:05:51.741 "nvme_io": false, 00:05:51.741 "nvme_io_md": false, 00:05:51.741 "write_zeroes": true, 00:05:51.741 "zcopy": true, 00:05:51.741 "get_zone_info": false, 00:05:51.741 "zone_management": false, 00:05:51.741 "zone_append": false, 00:05:51.741 "compare": false, 00:05:51.742 "compare_and_write": false, 00:05:51.742 "abort": true, 00:05:51.742 "seek_hole": false, 00:05:51.742 "seek_data": false, 00:05:51.742 "copy": true, 00:05:51.742 "nvme_iov_md": false 00:05:51.742 }, 00:05:51.742 "memory_domains": [ 00:05:51.742 { 00:05:51.742 "dma_device_id": "system", 00:05:51.742 "dma_device_type": 1 00:05:51.742 }, 00:05:51.742 { 00:05:51.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.742 "dma_device_type": 2 00:05:51.742 } 00:05:51.742 ], 00:05:51.742 "driver_specific": {} 00:05:51.742 } 00:05:51.742 ]' 00:05:51.742 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:51.742 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:51.742 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:51.742 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.742 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.742 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.742 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:51.742 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.742 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.742 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.742 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:51.742 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:51.742 01:49:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:51.742 00:05:51.742 real 0m0.167s 00:05:51.742 user 0m0.114s 00:05:51.742 sys 0m0.015s 00:05:51.742 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.742 01:49:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.742 ************************************ 00:05:51.742 END TEST rpc_plugins 00:05:51.742 ************************************ 00:05:51.742 01:49:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:51.742 01:49:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.742 01:49:06 rpc -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.742 01:49:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.742 ************************************ 00:05:51.742 START TEST rpc_trace_cmd_test 00:05:51.742 ************************************ 00:05:51.742 01:49:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:51.742 01:49:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:51.742 01:49:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:51.742 01:49:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.742 01:49:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.742 01:49:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.742 01:49:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:51.742 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid72269", 00:05:51.742 "tpoint_group_mask": "0x8", 00:05:51.742 "iscsi_conn": { 00:05:51.742 "mask": "0x2", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "scsi": { 00:05:51.742 "mask": "0x4", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "bdev": { 00:05:51.742 "mask": "0x8", 00:05:51.742 "tpoint_mask": "0xffffffffffffffff" 00:05:51.742 }, 00:05:51.742 "nvmf_rdma": { 00:05:51.742 "mask": "0x10", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "nvmf_tcp": { 00:05:51.742 "mask": "0x20", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "ftl": { 00:05:51.742 "mask": "0x40", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "blobfs": { 00:05:51.742 "mask": "0x80", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "dsa": { 00:05:51.742 "mask": "0x200", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "thread": { 00:05:51.742 "mask": "0x400", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "nvme_pcie": { 00:05:51.742 "mask": "0x800", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "iaa": { 00:05:51.742 "mask": "0x1000", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "nvme_tcp": { 00:05:51.742 "mask": "0x2000", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "bdev_nvme": { 00:05:51.742 "mask": "0x4000", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 }, 00:05:51.742 "sock": { 00:05:51.742 "mask": "0x8000", 00:05:51.742 "tpoint_mask": "0x0" 00:05:51.742 } 00:05:51.742 }' 00:05:51.742 01:49:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:51.742 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:51.742 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:52.000 00:05:52.000 real 0m0.277s 00:05:52.000 user 0m0.243s 00:05:52.000 sys 0m0.022s 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.000 01:49:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.000 ************************************ 00:05:52.000 END TEST rpc_trace_cmd_test 00:05:52.000 ************************************ 00:05:52.000 01:49:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:52.000 01:49:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:52.000 01:49:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:52.000 01:49:07 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.000 01:49:07 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.000 01:49:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.000 ************************************ 00:05:52.000 START TEST rpc_daemon_integrity 00:05:52.000 ************************************ 00:05:52.000 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:52.000 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:52.000 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.000 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:52.259 { 00:05:52.259 "name": "Malloc2", 00:05:52.259 "aliases": [ 00:05:52.259 "1733e0b5-d24c-4a96-9b91-0da4e0bc621f" 00:05:52.259 ], 00:05:52.259 "product_name": "Malloc disk", 00:05:52.259 "block_size": 512, 00:05:52.259 "num_blocks": 16384, 00:05:52.259 "uuid": "1733e0b5-d24c-4a96-9b91-0da4e0bc621f", 00:05:52.259 "assigned_rate_limits": { 00:05:52.259 "rw_ios_per_sec": 0, 00:05:52.259 "rw_mbytes_per_sec": 0, 00:05:52.259 "r_mbytes_per_sec": 0, 00:05:52.259 "w_mbytes_per_sec": 0 00:05:52.259 }, 00:05:52.259 "claimed": false, 00:05:52.259 "zoned": false, 00:05:52.259 "supported_io_types": { 00:05:52.259 "read": true, 00:05:52.259 "write": true, 00:05:52.259 "unmap": true, 00:05:52.259 "flush": true, 00:05:52.259 "reset": true, 00:05:52.259 "nvme_admin": false, 00:05:52.259 "nvme_io": false, 00:05:52.259 "nvme_io_md": false, 00:05:52.259 "write_zeroes": true, 00:05:52.259 "zcopy": true, 00:05:52.259 "get_zone_info": false, 00:05:52.259 "zone_management": false, 00:05:52.259 "zone_append": false, 
00:05:52.259 "compare": false, 00:05:52.259 "compare_and_write": false, 00:05:52.259 "abort": true, 00:05:52.259 "seek_hole": false, 00:05:52.259 "seek_data": false, 00:05:52.259 "copy": true, 00:05:52.259 "nvme_iov_md": false 00:05:52.259 }, 00:05:52.259 "memory_domains": [ 00:05:52.259 { 00:05:52.259 "dma_device_id": "system", 00:05:52.259 "dma_device_type": 1 00:05:52.259 }, 00:05:52.259 { 00:05:52.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.259 "dma_device_type": 2 00:05:52.259 } 00:05:52.259 ], 00:05:52.259 "driver_specific": {} 00:05:52.259 } 00:05:52.259 ]' 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.259 [2024-07-25 01:49:07.448484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:52.259 [2024-07-25 01:49:07.448521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:52.259 [2024-07-25 01:49:07.448552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e90590 00:05:52.259 [2024-07-25 01:49:07.448559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:52.259 [2024-07-25 01:49:07.449803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:52.259 [2024-07-25 01:49:07.449870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:52.259 Passthru0 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.259 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:52.259 { 00:05:52.259 "name": "Malloc2", 00:05:52.259 "aliases": [ 00:05:52.259 "1733e0b5-d24c-4a96-9b91-0da4e0bc621f" 00:05:52.259 ], 00:05:52.259 "product_name": "Malloc disk", 00:05:52.259 "block_size": 512, 00:05:52.259 "num_blocks": 16384, 00:05:52.259 "uuid": "1733e0b5-d24c-4a96-9b91-0da4e0bc621f", 00:05:52.259 "assigned_rate_limits": { 00:05:52.259 "rw_ios_per_sec": 0, 00:05:52.259 "rw_mbytes_per_sec": 0, 00:05:52.259 "r_mbytes_per_sec": 0, 00:05:52.259 "w_mbytes_per_sec": 0 00:05:52.259 }, 00:05:52.259 "claimed": true, 00:05:52.259 "claim_type": "exclusive_write", 00:05:52.259 "zoned": false, 00:05:52.259 "supported_io_types": { 00:05:52.259 "read": true, 00:05:52.259 "write": true, 00:05:52.259 "unmap": true, 00:05:52.259 "flush": true, 00:05:52.259 "reset": true, 00:05:52.259 "nvme_admin": false, 00:05:52.259 "nvme_io": false, 00:05:52.259 "nvme_io_md": false, 00:05:52.259 "write_zeroes": true, 00:05:52.259 "zcopy": true, 00:05:52.259 "get_zone_info": false, 00:05:52.259 "zone_management": false, 00:05:52.259 "zone_append": false, 00:05:52.259 "compare": false, 00:05:52.259 "compare_and_write": false, 00:05:52.259 "abort": true, 00:05:52.259 "seek_hole": 
false, 00:05:52.259 "seek_data": false, 00:05:52.259 "copy": true, 00:05:52.259 "nvme_iov_md": false 00:05:52.259 }, 00:05:52.259 "memory_domains": [ 00:05:52.259 { 00:05:52.259 "dma_device_id": "system", 00:05:52.259 "dma_device_type": 1 00:05:52.259 }, 00:05:52.259 { 00:05:52.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.259 "dma_device_type": 2 00:05:52.259 } 00:05:52.259 ], 00:05:52.259 "driver_specific": {} 00:05:52.259 }, 00:05:52.259 { 00:05:52.259 "name": "Passthru0", 00:05:52.259 "aliases": [ 00:05:52.259 "90eccb17-c10e-5637-99da-ff3c4aead624" 00:05:52.259 ], 00:05:52.259 "product_name": "passthru", 00:05:52.259 "block_size": 512, 00:05:52.259 "num_blocks": 16384, 00:05:52.259 "uuid": "90eccb17-c10e-5637-99da-ff3c4aead624", 00:05:52.259 "assigned_rate_limits": { 00:05:52.259 "rw_ios_per_sec": 0, 00:05:52.259 "rw_mbytes_per_sec": 0, 00:05:52.259 "r_mbytes_per_sec": 0, 00:05:52.259 "w_mbytes_per_sec": 0 00:05:52.259 }, 00:05:52.259 "claimed": false, 00:05:52.259 "zoned": false, 00:05:52.259 "supported_io_types": { 00:05:52.259 "read": true, 00:05:52.259 "write": true, 00:05:52.259 "unmap": true, 00:05:52.259 "flush": true, 00:05:52.259 "reset": true, 00:05:52.259 "nvme_admin": false, 00:05:52.259 "nvme_io": false, 00:05:52.259 "nvme_io_md": false, 00:05:52.259 "write_zeroes": true, 00:05:52.259 "zcopy": true, 00:05:52.259 "get_zone_info": false, 00:05:52.259 "zone_management": false, 00:05:52.259 "zone_append": false, 00:05:52.259 "compare": false, 00:05:52.259 "compare_and_write": false, 00:05:52.259 "abort": true, 00:05:52.259 "seek_hole": false, 00:05:52.259 "seek_data": false, 00:05:52.259 "copy": true, 00:05:52.259 "nvme_iov_md": false 00:05:52.259 }, 00:05:52.259 "memory_domains": [ 00:05:52.259 { 00:05:52.259 "dma_device_id": "system", 00:05:52.259 "dma_device_type": 1 00:05:52.259 }, 00:05:52.259 { 00:05:52.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.259 "dma_device_type": 2 00:05:52.259 } 00:05:52.259 ], 00:05:52.260 "driver_specific": { 00:05:52.260 "passthru": { 00:05:52.260 "name": "Passthru0", 00:05:52.260 "base_bdev_name": "Malloc2" 00:05:52.260 } 00:05:52.260 } 00:05:52.260 } 00:05:52.260 ]' 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.260 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.518 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.518 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:52.518 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.518 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.518 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.518 01:49:07 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:52.518 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:52.519 01:49:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:52.519 00:05:52.519 real 0m0.337s 00:05:52.519 user 0m0.220s 00:05:52.519 sys 0m0.040s 00:05:52.519 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.519 01:49:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.519 ************************************ 00:05:52.519 END TEST rpc_daemon_integrity 00:05:52.519 ************************************ 00:05:52.519 01:49:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:52.519 01:49:07 rpc -- rpc/rpc.sh@84 -- # killprocess 72269 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@950 -- # '[' -z 72269 ']' 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@954 -- # kill -0 72269 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@955 -- # uname 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72269 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.519 killing process with pid 72269 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72269' 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@969 -- # kill 72269 00:05:52.519 01:49:07 rpc -- common/autotest_common.sh@974 -- # wait 72269 00:05:52.777 00:05:52.777 real 0m2.024s 00:05:52.777 user 0m2.805s 00:05:52.777 sys 0m0.501s 00:05:52.777 01:49:07 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.777 01:49:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.777 ************************************ 00:05:52.777 END TEST rpc 00:05:52.777 ************************************ 00:05:52.777 01:49:07 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:52.777 01:49:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.777 01:49:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.777 01:49:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.777 ************************************ 00:05:52.777 START TEST skip_rpc 00:05:52.777 ************************************ 00:05:52.777 01:49:07 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:52.777 * Looking for test storage... 
00:05:52.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.777 01:49:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:52.777 01:49:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:52.777 01:49:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:52.777 01:49:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.777 01:49:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.777 01:49:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.777 ************************************ 00:05:52.777 START TEST skip_rpc 00:05:52.777 ************************************ 00:05:52.777 01:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:52.777 01:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=72454 00:05:52.777 01:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.777 01:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:52.777 01:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:53.036 [2024-07-25 01:49:08.102522] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:53.036 [2024-07-25 01:49:08.102609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72454 ] 00:05:53.036 [2024-07-25 01:49:08.223757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
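Since spdk_tgt was launched with --no-rpc-server (above), nothing should be listening on /var/tmp/spdk.sock, which is what the NOT rpc_cmd check below verifies. A minimal sketch of the same expectation; whether the failure surfaces as ENOENT or ECONNREFUSED depends on whether a stale socket file exists:

```python
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect("/var/tmp/spdk.sock")
except OSError as e:
    print(f"expected failure: {e}")  # e.g. ENOENT or ECONNREFUSED
else:
    raise AssertionError("RPC server unexpectedly listening")
finally:
    s.close()
```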
00:05:53.036 [2024-07-25 01:49:08.242052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.036 [2024-07-25 01:49:08.279852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.036 [2024-07-25 01:49:08.308755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 72454 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 72454 ']' 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 72454 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72454 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.304 killing process with pid 72454 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72454' 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 72454 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 72454 00:05:58.304 00:05:58.304 real 0m5.250s 00:05:58.304 user 0m4.988s 00:05:58.304 sys 0m0.171s 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.304 01:49:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.304 ************************************ 00:05:58.304 END TEST skip_rpc 00:05:58.304 ************************************ 00:05:58.304 01:49:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:58.304 01:49:13 skip_rpc -- common/autotest_common.sh@1101 -- 
# '[' 2 -le 1 ']' 00:05:58.304 01:49:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.304 01:49:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.304 ************************************ 00:05:58.304 START TEST skip_rpc_with_json 00:05:58.304 ************************************ 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=72541 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 72541 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 72541 ']' 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.304 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.304 [2024-07-25 01:49:13.385947] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:05:58.304 [2024-07-25 01:49:13.386020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72541 ] 00:05:58.304 [2024-07-25 01:49:13.500557] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
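The suite below first queries a transport that does not exist yet (the JSON-RPC error response), then creates it and saves the config. The same round trip through SPDK's bundled Python client, assuming it is on PYTHONPATH; method names and parameters are those visible in the log:

```python
from spdk.rpc.client import JSONRPCClient, JSONRPCException

client = JSONRPCClient("/var/tmp/spdk.sock")
try:
    client.call("nvmf_get_transports", {"trtype": "tcp"})
except JSONRPCException as e:
    print(f"expected error: {e}")  # "transport 'tcp' does not exist"

client.call("nvmf_create_transport", {"trtype": "tcp"})
print(client.call("nvmf_get_transports", {"trtype": "tcp"}))
```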
00:05:58.304 [2024-07-25 01:49:13.515267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.304 [2024-07-25 01:49:13.548497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.304 [2024-07-25 01:49:13.574672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.561 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.561 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:58.561 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:58.561 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.561 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.561 [2024-07-25 01:49:13.686010] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:58.561 request: 00:05:58.561 { 00:05:58.562 "trtype": "tcp", 00:05:58.562 "method": "nvmf_get_transports", 00:05:58.562 "req_id": 1 00:05:58.562 } 00:05:58.562 Got JSON-RPC error response 00:05:58.562 response: 00:05:58.562 { 00:05:58.562 "code": -19, 00:05:58.562 "message": "No such device" 00:05:58.562 } 00:05:58.562 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:58.562 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:58.562 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.562 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.562 [2024-07-25 01:49:13.698083] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.562 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.562 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:58.562 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.562 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.820 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.820 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.820 { 00:05:58.820 "subsystems": [ 00:05:58.820 { 00:05:58.820 "subsystem": "keyring", 00:05:58.820 "config": [] 00:05:58.820 }, 00:05:58.820 { 00:05:58.820 "subsystem": "iobuf", 00:05:58.820 "config": [ 00:05:58.820 { 00:05:58.820 "method": "iobuf_set_options", 00:05:58.820 "params": { 00:05:58.820 "small_pool_count": 8192, 00:05:58.820 "large_pool_count": 1024, 00:05:58.820 "small_bufsize": 8192, 00:05:58.820 "large_bufsize": 135168 00:05:58.820 } 00:05:58.820 } 00:05:58.820 ] 00:05:58.820 }, 00:05:58.820 { 00:05:58.820 "subsystem": "sock", 00:05:58.820 "config": [ 00:05:58.820 { 00:05:58.820 "method": "sock_set_default_impl", 00:05:58.820 "params": { 00:05:58.820 "impl_name": "uring" 00:05:58.820 } 00:05:58.820 }, 00:05:58.820 { 00:05:58.820 "method": "sock_impl_set_options", 00:05:58.820 "params": { 00:05:58.820 "impl_name": "ssl", 00:05:58.820 "recv_buf_size": 4096, 00:05:58.820 "send_buf_size": 4096, 00:05:58.820 "enable_recv_pipe": true, 00:05:58.820 "enable_quickack": false, 00:05:58.820 "enable_placement_id": 0, 00:05:58.820 "enable_zerocopy_send_server": true, 00:05:58.820 
"enable_zerocopy_send_client": false, 00:05:58.820 "zerocopy_threshold": 0, 00:05:58.820 "tls_version": 0, 00:05:58.821 "enable_ktls": false 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "sock_impl_set_options", 00:05:58.821 "params": { 00:05:58.821 "impl_name": "posix", 00:05:58.821 "recv_buf_size": 2097152, 00:05:58.821 "send_buf_size": 2097152, 00:05:58.821 "enable_recv_pipe": true, 00:05:58.821 "enable_quickack": false, 00:05:58.821 "enable_placement_id": 0, 00:05:58.821 "enable_zerocopy_send_server": true, 00:05:58.821 "enable_zerocopy_send_client": false, 00:05:58.821 "zerocopy_threshold": 0, 00:05:58.821 "tls_version": 0, 00:05:58.821 "enable_ktls": false 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "sock_impl_set_options", 00:05:58.821 "params": { 00:05:58.821 "impl_name": "uring", 00:05:58.821 "recv_buf_size": 2097152, 00:05:58.821 "send_buf_size": 2097152, 00:05:58.821 "enable_recv_pipe": true, 00:05:58.821 "enable_quickack": false, 00:05:58.821 "enable_placement_id": 0, 00:05:58.821 "enable_zerocopy_send_server": false, 00:05:58.821 "enable_zerocopy_send_client": false, 00:05:58.821 "zerocopy_threshold": 0, 00:05:58.821 "tls_version": 0, 00:05:58.821 "enable_ktls": false 00:05:58.821 } 00:05:58.821 } 00:05:58.821 ] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "vmd", 00:05:58.821 "config": [] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "accel", 00:05:58.821 "config": [ 00:05:58.821 { 00:05:58.821 "method": "accel_set_options", 00:05:58.821 "params": { 00:05:58.821 "small_cache_size": 128, 00:05:58.821 "large_cache_size": 16, 00:05:58.821 "task_count": 2048, 00:05:58.821 "sequence_count": 2048, 00:05:58.821 "buf_count": 2048 00:05:58.821 } 00:05:58.821 } 00:05:58.821 ] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "bdev", 00:05:58.821 "config": [ 00:05:58.821 { 00:05:58.821 "method": "bdev_set_options", 00:05:58.821 "params": { 00:05:58.821 "bdev_io_pool_size": 65535, 00:05:58.821 "bdev_io_cache_size": 256, 00:05:58.821 "bdev_auto_examine": true, 00:05:58.821 "iobuf_small_cache_size": 128, 00:05:58.821 "iobuf_large_cache_size": 16 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "bdev_raid_set_options", 00:05:58.821 "params": { 00:05:58.821 "process_window_size_kb": 1024, 00:05:58.821 "process_max_bandwidth_mb_sec": 0 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "bdev_iscsi_set_options", 00:05:58.821 "params": { 00:05:58.821 "timeout_sec": 30 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "bdev_nvme_set_options", 00:05:58.821 "params": { 00:05:58.821 "action_on_timeout": "none", 00:05:58.821 "timeout_us": 0, 00:05:58.821 "timeout_admin_us": 0, 00:05:58.821 "keep_alive_timeout_ms": 10000, 00:05:58.821 "arbitration_burst": 0, 00:05:58.821 "low_priority_weight": 0, 00:05:58.821 "medium_priority_weight": 0, 00:05:58.821 "high_priority_weight": 0, 00:05:58.821 "nvme_adminq_poll_period_us": 10000, 00:05:58.821 "nvme_ioq_poll_period_us": 0, 00:05:58.821 "io_queue_requests": 0, 00:05:58.821 "delay_cmd_submit": true, 00:05:58.821 "transport_retry_count": 4, 00:05:58.821 "bdev_retry_count": 3, 00:05:58.821 "transport_ack_timeout": 0, 00:05:58.821 "ctrlr_loss_timeout_sec": 0, 00:05:58.821 "reconnect_delay_sec": 0, 00:05:58.821 "fast_io_fail_timeout_sec": 0, 00:05:58.821 "disable_auto_failback": false, 00:05:58.821 "generate_uuids": false, 00:05:58.821 "transport_tos": 0, 00:05:58.821 "nvme_error_stat": false, 00:05:58.821 "rdma_srq_size": 0, 
00:05:58.821 "io_path_stat": false, 00:05:58.821 "allow_accel_sequence": false, 00:05:58.821 "rdma_max_cq_size": 0, 00:05:58.821 "rdma_cm_event_timeout_ms": 0, 00:05:58.821 "dhchap_digests": [ 00:05:58.821 "sha256", 00:05:58.821 "sha384", 00:05:58.821 "sha512" 00:05:58.821 ], 00:05:58.821 "dhchap_dhgroups": [ 00:05:58.821 "null", 00:05:58.821 "ffdhe2048", 00:05:58.821 "ffdhe3072", 00:05:58.821 "ffdhe4096", 00:05:58.821 "ffdhe6144", 00:05:58.821 "ffdhe8192" 00:05:58.821 ] 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "bdev_nvme_set_hotplug", 00:05:58.821 "params": { 00:05:58.821 "period_us": 100000, 00:05:58.821 "enable": false 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "bdev_wait_for_examine" 00:05:58.821 } 00:05:58.821 ] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "scsi", 00:05:58.821 "config": null 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "scheduler", 00:05:58.821 "config": [ 00:05:58.821 { 00:05:58.821 "method": "framework_set_scheduler", 00:05:58.821 "params": { 00:05:58.821 "name": "static" 00:05:58.821 } 00:05:58.821 } 00:05:58.821 ] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "vhost_scsi", 00:05:58.821 "config": [] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "vhost_blk", 00:05:58.821 "config": [] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "ublk", 00:05:58.821 "config": [] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "nbd", 00:05:58.821 "config": [] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "nvmf", 00:05:58.821 "config": [ 00:05:58.821 { 00:05:58.821 "method": "nvmf_set_config", 00:05:58.821 "params": { 00:05:58.821 "discovery_filter": "match_any", 00:05:58.821 "admin_cmd_passthru": { 00:05:58.821 "identify_ctrlr": false 00:05:58.821 } 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "nvmf_set_max_subsystems", 00:05:58.821 "params": { 00:05:58.821 "max_subsystems": 1024 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "nvmf_set_crdt", 00:05:58.821 "params": { 00:05:58.821 "crdt1": 0, 00:05:58.821 "crdt2": 0, 00:05:58.821 "crdt3": 0 00:05:58.821 } 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "method": "nvmf_create_transport", 00:05:58.821 "params": { 00:05:58.821 "trtype": "TCP", 00:05:58.821 "max_queue_depth": 128, 00:05:58.821 "max_io_qpairs_per_ctrlr": 127, 00:05:58.821 "in_capsule_data_size": 4096, 00:05:58.821 "max_io_size": 131072, 00:05:58.821 "io_unit_size": 131072, 00:05:58.821 "max_aq_depth": 128, 00:05:58.821 "num_shared_buffers": 511, 00:05:58.821 "buf_cache_size": 4294967295, 00:05:58.821 "dif_insert_or_strip": false, 00:05:58.821 "zcopy": false, 00:05:58.821 "c2h_success": true, 00:05:58.821 "sock_priority": 0, 00:05:58.821 "abort_timeout_sec": 1, 00:05:58.821 "ack_timeout": 0, 00:05:58.821 "data_wr_pool_size": 0 00:05:58.821 } 00:05:58.821 } 00:05:58.821 ] 00:05:58.821 }, 00:05:58.821 { 00:05:58.821 "subsystem": "iscsi", 00:05:58.821 "config": [ 00:05:58.821 { 00:05:58.821 "method": "iscsi_set_options", 00:05:58.821 "params": { 00:05:58.821 "node_base": "iqn.2016-06.io.spdk", 00:05:58.821 "max_sessions": 128, 00:05:58.821 "max_connections_per_session": 2, 00:05:58.821 "max_queue_depth": 64, 00:05:58.821 "default_time2wait": 2, 00:05:58.821 "default_time2retain": 20, 00:05:58.821 "first_burst_length": 8192, 00:05:58.821 "immediate_data": true, 00:05:58.821 "allow_duplicated_isid": false, 00:05:58.821 "error_recovery_level": 0, 00:05:58.821 "nop_timeout": 60, 00:05:58.821 "nop_in_interval": 
30, 00:05:58.821 "disable_chap": false, 00:05:58.821 "require_chap": false, 00:05:58.821 "mutual_chap": false, 00:05:58.821 "chap_group": 0, 00:05:58.821 "max_large_datain_per_connection": 64, 00:05:58.821 "max_r2t_per_connection": 4, 00:05:58.821 "pdu_pool_size": 36864, 00:05:58.821 "immediate_data_pool_size": 16384, 00:05:58.821 "data_out_pool_size": 2048 00:05:58.821 } 00:05:58.821 } 00:05:58.821 ] 00:05:58.821 } 00:05:58.821 ] 00:05:58.821 } 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 72541 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 72541 ']' 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 72541 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72541 00:05:58.821 killing process with pid 72541 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72541' 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 72541 00:05:58.821 01:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 72541 00:05:58.822 01:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=72555 00:05:58.822 01:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:58.822 01:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 72555 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 72555 ']' 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 72555 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72555 00:06:04.088 killing process with pid 72555 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72555' 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 72555 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 72555 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.088 01:49:19 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.088 00:06:04.088 real 0m6.028s 00:06:04.088 user 0m5.787s 00:06:04.088 sys 0m0.386s 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.088 ************************************ 00:06:04.088 END TEST skip_rpc_with_json 00:06:04.088 ************************************ 00:06:04.088 01:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.347 01:49:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:04.347 01:49:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.347 01:49:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.347 01:49:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.347 ************************************ 00:06:04.347 START TEST skip_rpc_with_delay 00:06:04.347 ************************************ 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.347 [2024-07-25 01:49:19.487919] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
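The NOT wrapper invoked above inverts a command's exit status, so the test passes only if spdk_tgt refuses the flag combination and emits the '--wait-for-rpc' error just logged. Its essence, as a simplified sketch of the autotest_common.sh helper (the real one also inspects the error code, as the es= handling below shows):

NOT() {
    # Run the wrapped command; succeed only when it fails.
    if "$@"; then
        return 1   # unexpected success means the test should fail
    fi
    return 0       # expected failure
}
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc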
00:06:04.347 [2024-07-25 01:49:19.488039] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.347 00:06:04.347 real 0m0.085s 00:06:04.347 user 0m0.055s 00:06:04.347 sys 0m0.029s 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.347 ************************************ 00:06:04.347 END TEST skip_rpc_with_delay 00:06:04.347 ************************************ 00:06:04.347 01:49:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:04.347 01:49:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:04.347 01:49:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:04.347 01:49:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:04.347 01:49:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.347 01:49:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.347 01:49:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.347 ************************************ 00:06:04.347 START TEST exit_on_failed_rpc_init 00:06:04.347 ************************************ 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=72665 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 72665 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 72665 ']' 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.347 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.347 [2024-07-25 01:49:19.627419] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:04.347 [2024-07-25 01:49:19.627515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72665 ] 00:06:04.606 [2024-07-25 01:49:19.749174] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:04.606 [2024-07-25 01:49:19.768683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.606 [2024-07-25 01:49:19.806098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.606 [2024-07-25 01:49:19.834718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.864 01:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.864 [2024-07-25 01:49:20.021192] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:04.864 [2024-07-25 01:49:20.021285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72670 ] 00:06:04.864 [2024-07-25 01:49:20.143412] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.123 [2024-07-25 01:49:20.163855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.123 [2024-07-25 01:49:20.205788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.123 [2024-07-25 01:49:20.205904] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
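The 'RPC Unix domain socket path /var/tmp/spdk.sock in use' error above is the whole point of this test: a second target started without -r inherits the default socket and must fail initialization. Reduced to its core, with the masks from the log (the sleep is a crude stand-in for the waitforlisten poll):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_TGT" -m 0x1 &        # first instance binds /var/tmp/spdk.sock
spdk_pid=$!
sleep 1
if "$SPDK_TGT" -m 0x2; then # second instance must fail to claim the socket
    echo 'second target unexpectedly initialized' >&2
fi
kill "$spdk_pid"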
00:06:05.123 [2024-07-25 01:49:20.205923] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:05.123 [2024-07-25 01:49:20.205934] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 72665 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 72665 ']' 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 72665 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72665 00:06:05.123 killing process with pid 72665 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72665' 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 72665 00:06:05.123 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 72665 00:06:05.382 00:06:05.383 real 0m0.969s 00:06:05.383 user 0m1.121s 00:06:05.383 sys 0m0.254s 00:06:05.383 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.383 ************************************ 00:06:05.383 END TEST exit_on_failed_rpc_init 00:06:05.383 ************************************ 00:06:05.383 01:49:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.383 01:49:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.383 ************************************ 00:06:05.383 END TEST skip_rpc 00:06:05.383 ************************************ 00:06:05.383 00:06:05.383 real 0m12.627s 00:06:05.383 user 0m12.040s 00:06:05.383 sys 0m1.026s 00:06:05.383 01:49:20 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.383 01:49:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.383 01:49:20 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.383 01:49:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.383 01:49:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.383 01:49:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.383 
************************************ 00:06:05.383 START TEST rpc_client 00:06:05.383 ************************************ 00:06:05.383 01:49:20 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.642 * Looking for test storage... 00:06:05.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:05.642 01:49:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:05.642 OK 00:06:05.642 01:49:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:05.642 00:06:05.642 real 0m0.110s 00:06:05.642 user 0m0.048s 00:06:05.642 sys 0m0.067s 00:06:05.642 01:49:20 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.642 01:49:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:05.642 ************************************ 00:06:05.642 END TEST rpc_client 00:06:05.642 ************************************ 00:06:05.642 01:49:20 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.642 01:49:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.642 01:49:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.642 01:49:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.642 ************************************ 00:06:05.642 START TEST json_config 00:06:05.642 ************************************ 00:06:05.642 01:49:20 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.642 01:49:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.642 01:49:20 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.642 01:49:20 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.642 01:49:20 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.642 01:49:20 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.643 01:49:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.643 01:49:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.643 01:49:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.643 01:49:20 json_config -- paths/export.sh@5 -- # export PATH 00:06:05.643 01:49:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@47 -- # : 0 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.643 01:49:20 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:05.643 INFO: JSON configuration test init 
00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.643 01:49:20 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:05.643 01:49:20 json_config -- json_config/common.sh@9 -- # local app=target 00:06:05.643 01:49:20 json_config -- json_config/common.sh@10 -- # shift 00:06:05.643 01:49:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.643 01:49:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.643 01:49:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.643 01:49:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.643 01:49:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.643 01:49:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72788 00:06:05.643 01:49:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.643 Waiting for target to run... 00:06:05.643 01:49:20 json_config -- json_config/common.sh@25 -- # waitforlisten 72788 /var/tmp/spdk_tgt.sock 00:06:05.643 01:49:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@831 -- # '[' -z 72788 ']' 00:06:05.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
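The --wait-for-rpc flag above holds the target in a pre-init state so configuration can be shaped over RPC before any subsystem starts. A hedged sketch of that startup handshake, using the binary, socket path, and flags from the log (the sleep stands in for the poll loop; the test itself drives init via load_config rather than calling framework_start_init directly):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
sleep 1
$RPC framework_start_init   # releases the target into its running state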
00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.643 01:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.903 [2024-07-25 01:49:20.982511] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:05.903 [2024-07-25 01:49:20.983042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72788 ] 00:06:06.161 [2024-07-25 01:49:21.305384] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.161 [2024-07-25 01:49:21.325130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.161 [2024-07-25 01:49:21.349005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.767 00:06:06.767 01:49:21 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.767 01:49:21 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:06.767 01:49:21 json_config -- json_config/common.sh@26 -- # echo '' 00:06:06.767 01:49:21 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:06.767 01:49:21 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:06.767 01:49:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.767 01:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.767 01:49:21 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:06.767 01:49:21 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:06.767 01:49:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:06.767 01:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.767 01:49:21 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:06.767 01:49:21 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:06.767 01:49:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:07.026 [2024-07-25 01:49:22.265561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.285 01:49:22 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:07.285 01:49:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:07.285 01:49:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.285 01:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.285 01:49:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:07.285 01:49:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:07.285 01:49:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 
00:06:07.285 01:49:22 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:07.285 01:49:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:07.285 01:49:22 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@51 -- # sort 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:07.544 01:49:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.544 01:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:07.544 01:49:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.544 01:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:07.544 01:49:22 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.545 01:49:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.803 MallocForNvmf0 00:06:07.803 01:49:22 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.803 01:49:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.062 MallocForNvmf1 00:06:08.062 01:49:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.062 01:49:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.062 
[2024-07-25 01:49:23.356763] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.319 01:49:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.319 01:49:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.577 01:49:23 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.577 01:49:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.835 01:49:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.835 01:49:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.835 01:49:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.835 01:49:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.094 [2024-07-25 01:49:24.305112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.094 01:49:24 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:09.094 01:49:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.094 01:49:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.094 01:49:24 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:09.094 01:49:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.094 01:49:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.094 01:49:24 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:09.094 01:49:24 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.094 01:49:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.352 MallocBdevForConfigChangeCheck 00:06:09.352 01:49:24 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:09.352 01:49:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.352 01:49:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.352 01:49:24 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:09.352 01:49:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.919 INFO: shutting down applications... 00:06:09.919 01:49:24 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
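Collected from the trace above, the NVMe-oF-over-TCP target this test builds can be reproduced by hand with the same RPCs, all values exactly as logged ($RPC is left unquoted on purpose so its embedded -s argument word-splits):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB, 1 KiB blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420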
00:06:09.919 01:49:24 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:09.919 01:49:24 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:09.919 01:49:24 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:09.919 01:49:24 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:10.176 Calling clear_iscsi_subsystem 00:06:10.176 Calling clear_nvmf_subsystem 00:06:10.176 Calling clear_nbd_subsystem 00:06:10.176 Calling clear_ublk_subsystem 00:06:10.176 Calling clear_vhost_blk_subsystem 00:06:10.176 Calling clear_vhost_scsi_subsystem 00:06:10.176 Calling clear_bdev_subsystem 00:06:10.176 01:49:25 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:10.176 01:49:25 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:10.176 01:49:25 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:10.176 01:49:25 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.176 01:49:25 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:10.176 01:49:25 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:10.434 01:49:25 json_config -- json_config/json_config.sh@349 -- # break 00:06:10.434 01:49:25 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:10.434 01:49:25 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:10.434 01:49:25 json_config -- json_config/common.sh@31 -- # local app=target 00:06:10.434 01:49:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:10.434 01:49:25 json_config -- json_config/common.sh@35 -- # [[ -n 72788 ]] 00:06:10.434 01:49:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 72788 00:06:10.434 01:49:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:10.434 01:49:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.434 01:49:25 json_config -- json_config/common.sh@41 -- # kill -0 72788 00:06:10.434 01:49:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.001 01:49:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.001 01:49:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.001 01:49:26 json_config -- json_config/common.sh@41 -- # kill -0 72788 00:06:11.001 01:49:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.001 01:49:26 json_config -- json_config/common.sh@43 -- # break 00:06:11.001 SPDK target shutdown done 00:06:11.001 01:49:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.001 01:49:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.001 INFO: relaunching applications... 00:06:11.001 01:49:26 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
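The shutdown sequence above follows a simple contract: send SIGINT, then poll the pid for up to 30 half-second intervals before declaring the target down. In isolation ($app_pid is 72788 in the run above):

kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # process gone -> clean exit
    sleep 0.5
done
echo 'SPDK target shutdown done'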
00:06:11.001 01:49:26 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.001 01:49:26 json_config -- json_config/common.sh@9 -- # local app=target 00:06:11.001 01:49:26 json_config -- json_config/common.sh@10 -- # shift 00:06:11.001 01:49:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.001 01:49:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.001 01:49:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.001 01:49:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.001 01:49:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.001 01:49:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72978 00:06:11.001 01:49:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.001 01:49:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.001 Waiting for target to run... 00:06:11.001 01:49:26 json_config -- json_config/common.sh@25 -- # waitforlisten 72978 /var/tmp/spdk_tgt.sock 00:06:11.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.001 01:49:26 json_config -- common/autotest_common.sh@831 -- # '[' -z 72978 ']' 00:06:11.001 01:49:26 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.001 01:49:26 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.001 01:49:26 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.001 01:49:26 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.001 01:49:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.001 [2024-07-25 01:49:26.204991] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:11.001 [2024-07-25 01:49:26.205299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72978 ] 00:06:11.260 [2024-07-25 01:49:26.491548] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.260 [2024-07-25 01:49:26.508821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.260 [2024-07-25 01:49:26.528145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.518 [2024-07-25 01:49:26.654175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.777 [2024-07-25 01:49:26.836937] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.777 [2024-07-25 01:49:26.868988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:12.036 00:06:12.036 INFO: Checking if target configuration is the same... 
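The INFO banner above introduces the comparison traced below: json_diff.sh normalizes both JSON documents with config_filter.py -method sort into temp files and byte-compares them, so key ordering never causes a false mismatch. Its essence, under those assumptions (the test actually hands the live config in via /dev/fd/62; piping save_config directly is an equivalent simplification):

FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
$RPC save_config | $FILTER -method sort > "$tmp_file_1"
$FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$tmp_file_2"
diff -u "$tmp_file_1" "$tmp_file_2" && echo 'INFO: JSON config files are the same'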
00:06:12.036 01:49:27 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.036 01:49:27 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:12.036 01:49:27 json_config -- json_config/common.sh@26 -- # echo '' 00:06:12.036 01:49:27 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:12.036 01:49:27 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:12.036 01:49:27 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:12.036 01:49:27 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.036 01:49:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.036 + '[' 2 -ne 2 ']' 00:06:12.036 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:12.036 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:12.036 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:12.036 +++ basename /dev/fd/62 00:06:12.036 ++ mktemp /tmp/62.XXX 00:06:12.036 + tmp_file_1=/tmp/62.QGb 00:06:12.036 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.036 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.036 + tmp_file_2=/tmp/spdk_tgt_config.json.JS8 00:06:12.036 + ret=0 00:06:12.036 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.294 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.294 + diff -u /tmp/62.QGb /tmp/spdk_tgt_config.json.JS8 00:06:12.294 INFO: JSON config files are the same 00:06:12.294 + echo 'INFO: JSON config files are the same' 00:06:12.294 + rm /tmp/62.QGb /tmp/spdk_tgt_config.json.JS8 00:06:12.294 + exit 0 00:06:12.294 INFO: changing configuration and checking if this can be detected... 00:06:12.294 01:49:27 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:12.294 01:49:27 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:12.294 01:49:27 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.294 01:49:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.552 01:49:27 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.552 01:49:27 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:12.552 01:49:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.552 + '[' 2 -ne 2 ']' 00:06:12.552 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:12.552 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:12.552 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:12.552 +++ basename /dev/fd/62 00:06:12.552 ++ mktemp /tmp/62.XXX 00:06:12.552 + tmp_file_1=/tmp/62.Un8 00:06:12.552 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.552 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.552 + tmp_file_2=/tmp/spdk_tgt_config.json.mfY 00:06:12.552 + ret=0 00:06:12.552 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.810 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.069 + diff -u /tmp/62.Un8 /tmp/spdk_tgt_config.json.mfY 00:06:13.069 + ret=1 00:06:13.069 + echo '=== Start of file: /tmp/62.Un8 ===' 00:06:13.069 + cat /tmp/62.Un8 00:06:13.069 + echo '=== End of file: /tmp/62.Un8 ===' 00:06:13.069 + echo '' 00:06:13.069 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mfY ===' 00:06:13.069 + cat /tmp/spdk_tgt_config.json.mfY 00:06:13.069 + echo '=== End of file: /tmp/spdk_tgt_config.json.mfY ===' 00:06:13.069 + echo '' 00:06:13.069 + rm /tmp/62.Un8 /tmp/spdk_tgt_config.json.mfY 00:06:13.069 + exit 1 00:06:13.069 INFO: configuration change detected. 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@321 -- # [[ -n 72978 ]] 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.069 01:49:28 json_config -- json_config/json_config.sh@327 -- # killprocess 72978 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@950 -- # '[' -z 72978 ']' 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@954 -- # kill -0 72978 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@955 -- # uname 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72978 00:06:13.069 
killing process with pid 72978 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72978' 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@969 -- # kill 72978 00:06:13.069 01:49:28 json_config -- common/autotest_common.sh@974 -- # wait 72978 00:06:13.328 01:49:28 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:13.328 01:49:28 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:13.328 01:49:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.328 01:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.328 INFO: Success 00:06:13.328 01:49:28 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:13.328 01:49:28 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:13.328 ************************************ 00:06:13.328 END TEST json_config 00:06:13.328 ************************************ 00:06:13.328 00:06:13.328 real 0m7.622s 00:06:13.328 user 0m10.821s 00:06:13.328 sys 0m1.455s 00:06:13.328 01:49:28 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.328 01:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.328 01:49:28 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.328 01:49:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.328 01:49:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.328 01:49:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.328 ************************************ 00:06:13.328 START TEST json_config_extra_key 00:06:13.328 ************************************ 00:06:13.328 01:49:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:06:13.328 01:49:28 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.328 01:49:28 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.328 01:49:28 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.328 01:49:28 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.328 01:49:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.328 01:49:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.328 01:49:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.328 01:49:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:13.328 01:49:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.328 01:49:28 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.328 01:49:28 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:13.328 INFO: launching applications... 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:13.328 01:49:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:13.328 Waiting for target to run... 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=73124 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 73124 /var/tmp/spdk_tgt.sock 00:06:13.328 01:49:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 73124 ']' 00:06:13.328 01:49:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.328 01:49:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
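Annotation: json_config_test_start_app boots spdk_tgt already configured, passing the JSON file on the command line with --json instead of replaying it over RPC after startup. A condensed sketch of the launch, using the exact command line from the trace (the app_pid/app_socket associative arrays declared above are simplified here to plain variables):

# Sketch: start spdk_tgt pre-loaded from extra_key.json and record its pid.
app_socket=/var/tmp/spdk_tgt.sock
"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$app_socket" \
    --json "$rootdir/test/json_config/extra_key.json" &
app_pid=$!
echo 'Waiting for target to run...'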
00:06:13.328 01:49:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.328 01:49:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.328 01:49:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.328 01:49:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.328 [2024-07-25 01:49:28.615970] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:13.328 [2024-07-25 01:49:28.616064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73124 ] 00:06:13.895 [2024-07-25 01:49:28.898003] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:13.895 [2024-07-25 01:49:28.916845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.895 [2024-07-25 01:49:28.937382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.895 [2024-07-25 01:49:28.957367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.462 00:06:14.462 INFO: shutting down applications... 00:06:14.462 01:49:29 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.462 01:49:29 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:14.462 01:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
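Annotation: the waitforlisten call that returns 0 at the end of this block is a bounded readiness poll, keeping the launch from racing the first RPC. Only the retry bound (max_retries=100), the socket path, and the messages come from the trace; probing readiness via rpc_get_methods is an assumption about the helper's internals:

# Sketch: poll until the app answers RPCs on its UNIX socket, or give up.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}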
00:06:14.462 01:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 73124 ]] 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 73124 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73124 00:06:14.462 01:49:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.028 01:49:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.028 01:49:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.028 01:49:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73124 00:06:15.028 01:49:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.028 01:49:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.028 01:49:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.028 01:49:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.028 SPDK target shutdown done 00:06:15.028 01:49:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.028 Success 00:06:15.028 00:06:15.028 real 0m1.624s 00:06:15.028 user 0m1.464s 00:06:15.028 sys 0m0.286s 00:06:15.028 01:49:30 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.028 ************************************ 00:06:15.028 END TEST json_config_extra_key 00:06:15.028 ************************************ 00:06:15.028 01:49:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.028 01:49:30 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.028 01:49:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.028 01:49:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.028 01:49:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.028 ************************************ 00:06:15.028 START TEST alias_rpc 00:06:15.028 ************************************ 00:06:15.028 01:49:30 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.028 * Looking for test storage... 00:06:15.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:15.028 01:49:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.029 01:49:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=73183 00:06:15.029 01:49:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 73183 00:06:15.029 01:49:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.029 01:49:30 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 73183 ']' 00:06:15.029 01:49:30 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
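Annotation: teardown in the json_config_test_shutdown_app sequence above mirrors startup: send SIGINT, then poll kill -0 up to 30 times with a 0.5 s sleep until the pid disappears, and only then print 'SPDK target shutdown done'. A sketch of that loop (signal, bound, and sleep match the trace; the hard-kill fallback is an assumption):

# Sketch: graceful-then-bounded shutdown, as in json_config/common.sh above.
shutdown_app_sketch() {
    local pid=$1 i
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    kill -9 "$pid" 2>/dev/null    # assumed fallback if SIGINT never lands
    return 1
}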
00:06:15.029 01:49:30 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.029 01:49:30 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.029 01:49:30 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.029 01:49:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 [2024-07-25 01:49:30.288043] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:15.029 [2024-07-25 01:49:30.288310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73183 ] 00:06:15.287 [2024-07-25 01:49:30.406157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.287 [2024-07-25 01:49:30.426476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.287 [2024-07-25 01:49:30.459832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.287 [2024-07-25 01:49:30.489957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.545 01:49:30 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.545 01:49:30 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:15.545 01:49:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:15.803 01:49:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 73183 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 73183 ']' 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 73183 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73183 00:06:15.803 killing process with pid 73183 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73183' 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@969 -- # kill 73183 00:06:15.803 01:49:30 alias_rpc -- common/autotest_common.sh@974 -- # wait 73183 00:06:15.803 ************************************ 00:06:15.803 END TEST alias_rpc 00:06:15.803 ************************************ 00:06:15.803 00:06:15.803 real 0m0.924s 00:06:15.803 user 0m1.058s 00:06:15.803 sys 0m0.265s 00:06:15.803 01:49:31 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.803 01:49:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.061 01:49:31 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:16.061 01:49:31 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:16.061 01:49:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.061 01:49:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.061 01:49:31 -- common/autotest_common.sh@10 -- # set +x 00:06:16.061 ************************************ 00:06:16.061 START 
TEST spdkcli_tcp 00:06:16.061 ************************************ 00:06:16.061 01:49:31 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:16.061 * Looking for test storage... 00:06:16.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:16.061 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:16.061 01:49:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:16.061 01:49:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:16.061 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.061 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.061 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.061 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.061 01:49:31 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.061 01:49:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.062 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=73246 00:06:16.062 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 73246 00:06:16.062 01:49:31 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 73246 ']' 00:06:16.062 01:49:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.062 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.062 01:49:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.062 01:49:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.062 01:49:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.062 01:49:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.062 [2024-07-25 01:49:31.276179] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:16.062 [2024-07-25 01:49:31.276267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73246 ] 00:06:16.320 [2024-07-25 01:49:31.398486] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
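Annotation: spdkcli_tcp's twist, exercised just below, is exposing the UNIX-domain RPC socket over TCP: a backgrounded socat bridges TCP-LISTEN:9998 to UNIX-CONNECT:/var/tmp/spdk.sock, and rpc.py then targets 127.0.0.1:9998 with connection retries and a timeout. The bridge in isolation (both commands match the trace; the explicit kill is an assumption, since the test's err_cleanup trap handles it):

# Sketch: bridge the RPC socket onto TCP port 9998 and issue one RPC over it.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
"$rootdir/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"    # assumed cleanup; see err_cleanup in the trace above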
00:06:16.320 [2024-07-25 01:49:31.416867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.320 [2024-07-25 01:49:31.451929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.320 [2024-07-25 01:49:31.451938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.320 [2024-07-25 01:49:31.480293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.320 01:49:31 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.320 01:49:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:16.320 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=73260 00:06:16.320 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:16.320 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:16.578 [ 00:06:16.578 "bdev_malloc_delete", 00:06:16.578 "bdev_malloc_create", 00:06:16.578 "bdev_null_resize", 00:06:16.578 "bdev_null_delete", 00:06:16.578 "bdev_null_create", 00:06:16.578 "bdev_nvme_cuse_unregister", 00:06:16.578 "bdev_nvme_cuse_register", 00:06:16.578 "bdev_opal_new_user", 00:06:16.578 "bdev_opal_set_lock_state", 00:06:16.578 "bdev_opal_delete", 00:06:16.578 "bdev_opal_get_info", 00:06:16.578 "bdev_opal_create", 00:06:16.578 "bdev_nvme_opal_revert", 00:06:16.578 "bdev_nvme_opal_init", 00:06:16.578 "bdev_nvme_send_cmd", 00:06:16.578 "bdev_nvme_get_path_iostat", 00:06:16.578 "bdev_nvme_get_mdns_discovery_info", 00:06:16.578 "bdev_nvme_stop_mdns_discovery", 00:06:16.578 "bdev_nvme_start_mdns_discovery", 00:06:16.578 "bdev_nvme_set_multipath_policy", 00:06:16.578 "bdev_nvme_set_preferred_path", 00:06:16.578 "bdev_nvme_get_io_paths", 00:06:16.578 "bdev_nvme_remove_error_injection", 00:06:16.578 "bdev_nvme_add_error_injection", 00:06:16.578 "bdev_nvme_get_discovery_info", 00:06:16.578 "bdev_nvme_stop_discovery", 00:06:16.578 "bdev_nvme_start_discovery", 00:06:16.578 "bdev_nvme_get_controller_health_info", 00:06:16.578 "bdev_nvme_disable_controller", 00:06:16.578 "bdev_nvme_enable_controller", 00:06:16.578 "bdev_nvme_reset_controller", 00:06:16.578 "bdev_nvme_get_transport_statistics", 00:06:16.578 "bdev_nvme_apply_firmware", 00:06:16.578 "bdev_nvme_detach_controller", 00:06:16.578 "bdev_nvme_get_controllers", 00:06:16.578 "bdev_nvme_attach_controller", 00:06:16.578 "bdev_nvme_set_hotplug", 00:06:16.578 "bdev_nvme_set_options", 00:06:16.578 "bdev_passthru_delete", 00:06:16.578 "bdev_passthru_create", 00:06:16.578 "bdev_lvol_set_parent_bdev", 00:06:16.578 "bdev_lvol_set_parent", 00:06:16.578 "bdev_lvol_check_shallow_copy", 00:06:16.578 "bdev_lvol_start_shallow_copy", 00:06:16.578 "bdev_lvol_grow_lvstore", 00:06:16.578 "bdev_lvol_get_lvols", 00:06:16.578 "bdev_lvol_get_lvstores", 00:06:16.578 "bdev_lvol_delete", 00:06:16.578 "bdev_lvol_set_read_only", 00:06:16.578 "bdev_lvol_resize", 00:06:16.578 "bdev_lvol_decouple_parent", 00:06:16.578 "bdev_lvol_inflate", 00:06:16.578 "bdev_lvol_rename", 00:06:16.578 "bdev_lvol_clone_bdev", 00:06:16.578 "bdev_lvol_clone", 00:06:16.578 "bdev_lvol_snapshot", 00:06:16.578 "bdev_lvol_create", 00:06:16.578 "bdev_lvol_delete_lvstore", 00:06:16.578 "bdev_lvol_rename_lvstore", 00:06:16.578 "bdev_lvol_create_lvstore", 00:06:16.578 "bdev_raid_set_options", 00:06:16.578 "bdev_raid_remove_base_bdev", 00:06:16.578 "bdev_raid_add_base_bdev", 00:06:16.578 "bdev_raid_delete", 00:06:16.578 "bdev_raid_create", 
00:06:16.578 "bdev_raid_get_bdevs", 00:06:16.578 "bdev_error_inject_error", 00:06:16.578 "bdev_error_delete", 00:06:16.578 "bdev_error_create", 00:06:16.578 "bdev_split_delete", 00:06:16.579 "bdev_split_create", 00:06:16.579 "bdev_delay_delete", 00:06:16.579 "bdev_delay_create", 00:06:16.579 "bdev_delay_update_latency", 00:06:16.579 "bdev_zone_block_delete", 00:06:16.579 "bdev_zone_block_create", 00:06:16.579 "blobfs_create", 00:06:16.579 "blobfs_detect", 00:06:16.579 "blobfs_set_cache_size", 00:06:16.579 "bdev_aio_delete", 00:06:16.579 "bdev_aio_rescan", 00:06:16.579 "bdev_aio_create", 00:06:16.579 "bdev_ftl_set_property", 00:06:16.579 "bdev_ftl_get_properties", 00:06:16.579 "bdev_ftl_get_stats", 00:06:16.579 "bdev_ftl_unmap", 00:06:16.579 "bdev_ftl_unload", 00:06:16.579 "bdev_ftl_delete", 00:06:16.579 "bdev_ftl_load", 00:06:16.579 "bdev_ftl_create", 00:06:16.579 "bdev_virtio_attach_controller", 00:06:16.579 "bdev_virtio_scsi_get_devices", 00:06:16.579 "bdev_virtio_detach_controller", 00:06:16.579 "bdev_virtio_blk_set_hotplug", 00:06:16.579 "bdev_iscsi_delete", 00:06:16.579 "bdev_iscsi_create", 00:06:16.579 "bdev_iscsi_set_options", 00:06:16.579 "bdev_uring_delete", 00:06:16.579 "bdev_uring_rescan", 00:06:16.579 "bdev_uring_create", 00:06:16.579 "accel_error_inject_error", 00:06:16.579 "ioat_scan_accel_module", 00:06:16.579 "dsa_scan_accel_module", 00:06:16.579 "iaa_scan_accel_module", 00:06:16.579 "keyring_file_remove_key", 00:06:16.579 "keyring_file_add_key", 00:06:16.579 "keyring_linux_set_options", 00:06:16.579 "iscsi_get_histogram", 00:06:16.579 "iscsi_enable_histogram", 00:06:16.579 "iscsi_set_options", 00:06:16.579 "iscsi_get_auth_groups", 00:06:16.579 "iscsi_auth_group_remove_secret", 00:06:16.579 "iscsi_auth_group_add_secret", 00:06:16.579 "iscsi_delete_auth_group", 00:06:16.579 "iscsi_create_auth_group", 00:06:16.579 "iscsi_set_discovery_auth", 00:06:16.579 "iscsi_get_options", 00:06:16.579 "iscsi_target_node_request_logout", 00:06:16.579 "iscsi_target_node_set_redirect", 00:06:16.579 "iscsi_target_node_set_auth", 00:06:16.579 "iscsi_target_node_add_lun", 00:06:16.579 "iscsi_get_stats", 00:06:16.579 "iscsi_get_connections", 00:06:16.579 "iscsi_portal_group_set_auth", 00:06:16.579 "iscsi_start_portal_group", 00:06:16.579 "iscsi_delete_portal_group", 00:06:16.579 "iscsi_create_portal_group", 00:06:16.579 "iscsi_get_portal_groups", 00:06:16.579 "iscsi_delete_target_node", 00:06:16.579 "iscsi_target_node_remove_pg_ig_maps", 00:06:16.579 "iscsi_target_node_add_pg_ig_maps", 00:06:16.579 "iscsi_create_target_node", 00:06:16.579 "iscsi_get_target_nodes", 00:06:16.579 "iscsi_delete_initiator_group", 00:06:16.579 "iscsi_initiator_group_remove_initiators", 00:06:16.579 "iscsi_initiator_group_add_initiators", 00:06:16.579 "iscsi_create_initiator_group", 00:06:16.579 "iscsi_get_initiator_groups", 00:06:16.579 "nvmf_set_crdt", 00:06:16.579 "nvmf_set_config", 00:06:16.579 "nvmf_set_max_subsystems", 00:06:16.579 "nvmf_stop_mdns_prr", 00:06:16.579 "nvmf_publish_mdns_prr", 00:06:16.579 "nvmf_subsystem_get_listeners", 00:06:16.579 "nvmf_subsystem_get_qpairs", 00:06:16.579 "nvmf_subsystem_get_controllers", 00:06:16.579 "nvmf_get_stats", 00:06:16.579 "nvmf_get_transports", 00:06:16.579 "nvmf_create_transport", 00:06:16.579 "nvmf_get_targets", 00:06:16.579 "nvmf_delete_target", 00:06:16.579 "nvmf_create_target", 00:06:16.579 "nvmf_subsystem_allow_any_host", 00:06:16.579 "nvmf_subsystem_remove_host", 00:06:16.579 "nvmf_subsystem_add_host", 00:06:16.579 "nvmf_ns_remove_host", 00:06:16.579 
"nvmf_ns_add_host", 00:06:16.579 "nvmf_subsystem_remove_ns", 00:06:16.579 "nvmf_subsystem_add_ns", 00:06:16.579 "nvmf_subsystem_listener_set_ana_state", 00:06:16.579 "nvmf_discovery_get_referrals", 00:06:16.579 "nvmf_discovery_remove_referral", 00:06:16.579 "nvmf_discovery_add_referral", 00:06:16.579 "nvmf_subsystem_remove_listener", 00:06:16.579 "nvmf_subsystem_add_listener", 00:06:16.579 "nvmf_delete_subsystem", 00:06:16.579 "nvmf_create_subsystem", 00:06:16.579 "nvmf_get_subsystems", 00:06:16.579 "env_dpdk_get_mem_stats", 00:06:16.579 "nbd_get_disks", 00:06:16.579 "nbd_stop_disk", 00:06:16.579 "nbd_start_disk", 00:06:16.579 "ublk_recover_disk", 00:06:16.579 "ublk_get_disks", 00:06:16.579 "ublk_stop_disk", 00:06:16.579 "ublk_start_disk", 00:06:16.579 "ublk_destroy_target", 00:06:16.579 "ublk_create_target", 00:06:16.579 "virtio_blk_create_transport", 00:06:16.579 "virtio_blk_get_transports", 00:06:16.579 "vhost_controller_set_coalescing", 00:06:16.579 "vhost_get_controllers", 00:06:16.579 "vhost_delete_controller", 00:06:16.579 "vhost_create_blk_controller", 00:06:16.579 "vhost_scsi_controller_remove_target", 00:06:16.579 "vhost_scsi_controller_add_target", 00:06:16.579 "vhost_start_scsi_controller", 00:06:16.579 "vhost_create_scsi_controller", 00:06:16.579 "thread_set_cpumask", 00:06:16.579 "framework_get_governor", 00:06:16.579 "framework_get_scheduler", 00:06:16.579 "framework_set_scheduler", 00:06:16.579 "framework_get_reactors", 00:06:16.579 "thread_get_io_channels", 00:06:16.579 "thread_get_pollers", 00:06:16.579 "thread_get_stats", 00:06:16.579 "framework_monitor_context_switch", 00:06:16.579 "spdk_kill_instance", 00:06:16.579 "log_enable_timestamps", 00:06:16.579 "log_get_flags", 00:06:16.579 "log_clear_flag", 00:06:16.579 "log_set_flag", 00:06:16.579 "log_get_level", 00:06:16.579 "log_set_level", 00:06:16.579 "log_get_print_level", 00:06:16.579 "log_set_print_level", 00:06:16.579 "framework_enable_cpumask_locks", 00:06:16.579 "framework_disable_cpumask_locks", 00:06:16.579 "framework_wait_init", 00:06:16.579 "framework_start_init", 00:06:16.579 "scsi_get_devices", 00:06:16.579 "bdev_get_histogram", 00:06:16.579 "bdev_enable_histogram", 00:06:16.579 "bdev_set_qos_limit", 00:06:16.579 "bdev_set_qd_sampling_period", 00:06:16.579 "bdev_get_bdevs", 00:06:16.579 "bdev_reset_iostat", 00:06:16.579 "bdev_get_iostat", 00:06:16.579 "bdev_examine", 00:06:16.579 "bdev_wait_for_examine", 00:06:16.579 "bdev_set_options", 00:06:16.579 "notify_get_notifications", 00:06:16.579 "notify_get_types", 00:06:16.579 "accel_get_stats", 00:06:16.579 "accel_set_options", 00:06:16.579 "accel_set_driver", 00:06:16.579 "accel_crypto_key_destroy", 00:06:16.579 "accel_crypto_keys_get", 00:06:16.579 "accel_crypto_key_create", 00:06:16.579 "accel_assign_opc", 00:06:16.579 "accel_get_module_info", 00:06:16.579 "accel_get_opc_assignments", 00:06:16.579 "vmd_rescan", 00:06:16.579 "vmd_remove_device", 00:06:16.579 "vmd_enable", 00:06:16.579 "sock_get_default_impl", 00:06:16.579 "sock_set_default_impl", 00:06:16.579 "sock_impl_set_options", 00:06:16.579 "sock_impl_get_options", 00:06:16.579 "iobuf_get_stats", 00:06:16.579 "iobuf_set_options", 00:06:16.579 "framework_get_pci_devices", 00:06:16.579 "framework_get_config", 00:06:16.579 "framework_get_subsystems", 00:06:16.579 "trace_get_info", 00:06:16.579 "trace_get_tpoint_group_mask", 00:06:16.579 "trace_disable_tpoint_group", 00:06:16.579 "trace_enable_tpoint_group", 00:06:16.579 "trace_clear_tpoint_mask", 00:06:16.579 "trace_set_tpoint_mask", 00:06:16.579 
"keyring_get_keys", 00:06:16.579 "spdk_get_version", 00:06:16.579 "rpc_get_methods" 00:06:16.579 ] 00:06:16.579 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:16.579 01:49:31 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:16.579 01:49:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.838 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:16.838 01:49:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 73246 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 73246 ']' 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 73246 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73246 00:06:16.838 killing process with pid 73246 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73246' 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 73246 00:06:16.838 01:49:31 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 73246 00:06:17.096 ************************************ 00:06:17.096 END TEST spdkcli_tcp 00:06:17.096 ************************************ 00:06:17.096 00:06:17.096 real 0m1.018s 00:06:17.096 user 0m1.818s 00:06:17.096 sys 0m0.316s 00:06:17.096 01:49:32 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.096 01:49:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.096 01:49:32 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.096 01:49:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.096 01:49:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.096 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:06:17.096 ************************************ 00:06:17.096 START TEST dpdk_mem_utility 00:06:17.096 ************************************ 00:06:17.096 01:49:32 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.096 * Looking for test storage... 00:06:17.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:17.096 01:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:17.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:17.096 01:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=73324 00:06:17.096 01:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 73324 00:06:17.096 01:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.096 01:49:32 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 73324 ']' 00:06:17.096 01:49:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.096 01:49:32 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.096 01:49:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.096 01:49:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.096 01:49:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.096 [2024-07-25 01:49:32.341334] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:17.096 [2024-07-25 01:49:32.341709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73324 ] 00:06:17.354 [2024-07-25 01:49:32.469071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.354 [2024-07-25 01:49:32.487976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.354 [2024-07-25 01:49:32.521714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.354 [2024-07-25 01:49:32.550115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.305 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.305 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:18.305 01:49:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:18.305 01:49:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:18.305 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.305 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.305 { 00:06:18.305 "filename": "/tmp/spdk_mem_dump.txt" 00:06:18.305 } 00:06:18.305 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.305 01:49:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.305 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:18.305 1 heaps totaling size 814.000000 MiB 00:06:18.305 size: 814.000000 MiB heap id: 0 00:06:18.305 end heaps---------- 00:06:18.305 8 mempools totaling size 598.116089 MiB 00:06:18.305 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:18.305 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:18.305 size: 84.521057 MiB name: bdev_io_73324 00:06:18.305 size: 51.011292 MiB name: evtpool_73324 00:06:18.305 size: 50.003479 MiB name: msgpool_73324 00:06:18.305 size: 21.763794 MiB name: PDU_Pool 00:06:18.305 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:18.305 size: 0.026123 MiB 
name: Session_Pool 00:06:18.305 end mempools------- 00:06:18.305 6 memzones totaling size 4.142822 MiB 00:06:18.305 size: 1.000366 MiB name: RG_ring_0_73324 00:06:18.305 size: 1.000366 MiB name: RG_ring_1_73324 00:06:18.305 size: 1.000366 MiB name: RG_ring_4_73324 00:06:18.305 size: 1.000366 MiB name: RG_ring_5_73324 00:06:18.305 size: 0.125366 MiB name: RG_ring_2_73324 00:06:18.305 size: 0.015991 MiB name: RG_ring_3_73324 00:06:18.305 end memzones------- 00:06:18.305 01:49:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:18.305 heap id: 0 total size: 814.000000 MiB number of busy elements: 289 number of free elements: 15 00:06:18.305 list of free elements. size: 12.473938 MiB 00:06:18.305 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:18.305 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:18.305 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:18.305 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:18.305 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:18.305 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:18.305 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:18.305 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:18.305 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:18.305 element at address: 0x20001aa00000 with size: 0.570618 MiB 00:06:18.305 element at address: 0x20000b200000 with size: 0.489624 MiB 00:06:18.305 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:18.305 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:18.305 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:18.305 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:18.305 list of standard malloc elements. 
size: 199.263489 MiB 00:06:18.305 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:18.305 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:18.305 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:18.305 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:18.305 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:18.305 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:18.305 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:18.305 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:18.305 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:18.305 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:06:18.305 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:18.305 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:18.305 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:18.306 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:18.306 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:18.306 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:18.306 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:18.306 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa928c0 
with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:18.306 element at address: 0x20001aa92a40 with size: 0.000183 MiB [some 140 further element lines elided: the dump walks 0x20001aa92b00 through 0x20001aa95440 and then 0x200027e65500 through 0x200027e6fe40, with gaps, and every entry reports the same size: 0.000183 MiB] 00:06:18.307 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:18.307 list of memzone associated elements.
size: 602.262573 MiB 00:06:18.307 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:18.307 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:18.307 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:18.307 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:18.307 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:18.307 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_73324_0 00:06:18.307 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:18.307 associated memzone info: size: 48.002930 MiB name: MP_evtpool_73324_0 00:06:18.307 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:18.307 associated memzone info: size: 48.002930 MiB name: MP_msgpool_73324_0 00:06:18.307 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:18.307 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:18.307 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:18.307 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:18.307 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:18.307 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_73324 00:06:18.307 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:18.307 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_73324 00:06:18.307 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:18.307 associated memzone info: size: 1.007996 MiB name: MP_evtpool_73324 00:06:18.307 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:18.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:18.307 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:18.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:18.307 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:18.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:18.307 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:18.307 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:18.307 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:18.307 associated memzone info: size: 1.000366 MiB name: RG_ring_0_73324 00:06:18.307 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:18.307 associated memzone info: size: 1.000366 MiB name: RG_ring_1_73324 00:06:18.307 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:18.307 associated memzone info: size: 1.000366 MiB name: RG_ring_4_73324 00:06:18.307 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:18.307 associated memzone info: size: 1.000366 MiB name: RG_ring_5_73324 00:06:18.307 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:18.307 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_73324 00:06:18.307 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:18.307 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:18.307 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:18.307 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:18.307 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:18.307 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:18.307 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:18.307 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_73324 00:06:18.307 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:18.307 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:18.307 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:18.307 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:18.307 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:18.307 associated memzone info: size: 0.015991 MiB name: RG_ring_3_73324 00:06:18.307 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:18.307 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:18.307 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:18.307 associated memzone info: size: 0.000183 MiB name: MP_msgpool_73324 00:06:18.307 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:18.307 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_73324 00:06:18.307 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:18.307 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:18.307 01:49:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:18.307 01:49:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 73324 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 73324 ']' 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 73324 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73324 00:06:18.307 killing process with pid 73324 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73324' 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 73324 00:06:18.307 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 73324 00:06:18.572 00:06:18.572 real 0m1.438s 00:06:18.572 user 0m1.632s 00:06:18.572 sys 0m0.307s 00:06:18.572 ************************************ 00:06:18.572 END TEST dpdk_mem_utility 00:06:18.572 ************************************ 00:06:18.572 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.572 01:49:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.572 01:49:33 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:18.572 01:49:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.572 01:49:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.572 01:49:33 -- common/autotest_common.sh@10 -- # set +x 00:06:18.572 ************************************ 00:06:18.572 START TEST event 00:06:18.572 ************************************ 00:06:18.572 01:49:33 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:18.572 * Looking for test storage... 
00:06:18.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:18.572 01:49:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:18.572 01:49:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:18.572 01:49:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.572 01:49:33 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:18.572 01:49:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.572 01:49:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.572 ************************************ 00:06:18.572 START TEST event_perf 00:06:18.572 ************************************ 00:06:18.572 01:49:33 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.572 Running I/O for 1 seconds...[2024-07-25 01:49:33.790622] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:18.572 [2024-07-25 01:49:33.790723] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73401 ] 00:06:18.830 [2024-07-25 01:49:33.910603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.830 [2024-07-25 01:49:33.929607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.830 [2024-07-25 01:49:33.967296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.830 [2024-07-25 01:49:33.967435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.830 Running I/O for 1 seconds...[2024-07-25 01:49:33.967557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.830 [2024-07-25 01:49:33.967558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.766 00:06:19.766 lcore 0: 202654 00:06:19.766 lcore 1: 202652 00:06:19.766 lcore 2: 202654 00:06:19.766 lcore 3: 202653 00:06:19.766 done. 00:06:19.766 00:06:19.766 real 0m1.243s 00:06:19.766 user 0m4.076s 00:06:19.766 sys 0m0.046s 00:06:19.766 01:49:35 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.766 01:49:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.766 ************************************ 00:06:19.766 END TEST event_perf 00:06:19.766 ************************************ 00:06:19.766 01:49:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:19.766 01:49:35 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:19.766 01:49:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.766 01:49:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.025 ************************************ 00:06:20.025 START TEST event_reactor 00:06:20.025 ************************************ 00:06:20.025 01:49:35 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:20.025 [2024-07-25 01:49:35.086822] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
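The START TEST / END TEST banners and the real/user/sys triplets that delimit every subtest in this log come from the run_test helper in test/common/autotest_common.sh. A minimal sketch of its observable behavior (banner text and timing are taken from this log; the real helper also manages xtrace state and does the argument-count checks visible in the traces above, which are not reproduced here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # e.g. the event_perf invocation recorded above:
    run_test event_perf ./test/event/event_perf/event_perf -m 0xF -t 1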
00:06:20.025 [2024-07-25 01:49:35.087081] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73434 ] 00:06:20.025 [2024-07-25 01:49:35.201992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.025 [2024-07-25 01:49:35.219898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.025 [2024-07-25 01:49:35.250352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.398 test_start 00:06:21.398 oneshot 00:06:21.398 tick 100 00:06:21.398 tick 100 00:06:21.398 tick 250 00:06:21.398 tick 100 00:06:21.398 tick 100 00:06:21.398 tick 100 00:06:21.398 tick 250 00:06:21.398 tick 500 00:06:21.398 tick 100 00:06:21.398 tick 100 00:06:21.398 tick 250 00:06:21.398 tick 100 00:06:21.398 tick 100 00:06:21.398 test_end 00:06:21.398 ************************************ 00:06:21.398 END TEST event_reactor 00:06:21.398 ************************************ 00:06:21.398 00:06:21.398 real 0m1.226s 00:06:21.398 user 0m1.085s 00:06:21.398 sys 0m0.036s 00:06:21.398 01:49:36 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.398 01:49:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:21.398 01:49:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.398 01:49:36 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:21.398 01:49:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.398 01:49:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.398 ************************************ 00:06:21.398 START TEST event_reactor_perf 00:06:21.398 ************************************ 00:06:21.398 01:49:36 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.399 [2024-07-25 01:49:36.367162] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:21.399 [2024-07-25 01:49:36.367295] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73464 ] 00:06:21.399 [2024-07-25 01:49:36.486592] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
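The oneshot/tick trace printed by the event_reactor run above is the raw poller schedule: one one-shot poller plus repeating pollers registered at three periods (the 100, 250 and 500 labels), so the shortest period fires most often within the one-second run. Assuming the app output were captured to a file such as reactor.log (hypothetical name), the trace tallies up as:

    # count how often each poller fired during the run
    grep -oE 'oneshot|tick [0-9]+' reactor.log | sort | uniq -c | sort -rn
    #   9 tick 100
    #   3 tick 250
    #   1 tick 500
    #   1 oneshot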
00:06:21.399 [2024-07-25 01:49:36.504148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.399 [2024-07-25 01:49:36.536319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.332 test_start 00:06:22.332 test_end 00:06:22.332 Performance: 439892 events per second 00:06:22.332 00:06:22.332 real 0m1.228s 00:06:22.332 user 0m1.082s 00:06:22.332 sys 0m0.040s 00:06:22.332 01:49:37 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.332 01:49:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.332 ************************************ 00:06:22.332 END TEST event_reactor_perf 00:06:22.332 ************************************ 00:06:22.332 01:49:37 event -- event/event.sh@49 -- # uname -s 00:06:22.332 01:49:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:22.332 01:49:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:22.332 01:49:37 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.332 01:49:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.332 01:49:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 ************************************ 00:06:22.591 START TEST event_scheduler 00:06:22.591 ************************************ 00:06:22.591 01:49:37 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:22.591 * Looking for test storage... 00:06:22.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:22.591 01:49:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:22.591 01:49:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=73526 00:06:22.591 01:49:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.591 01:49:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 73526 00:06:22.591 01:49:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:22.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.591 01:49:37 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 73526 ']' 00:06:22.591 01:49:37 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.591 01:49:37 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.591 01:49:37 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.591 01:49:37 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.591 01:49:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 [2024-07-25 01:49:37.768388] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
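reactor_perf prints a single headline figure (here 439892 events per second). For trend tracking across nightly runs, one approach is to scrape and bound it; the log file name and the floor value below are illustrative assumptions, not part of the test:

    events=$(grep -oE 'Performance: [0-9]+' reactor_perf.log | awk '{print $2}')
    echo "reactor_perf: ${events} events/sec"
    # fail loudly if throughput collapses (threshold is a made-up example)
    (( events >= 100000 )) || echo "WARN: events/sec below floor" >&2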
00:06:22.591 [2024-07-25 01:49:37.768467] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73526 ] 00:06:22.850 [2024-07-25 01:49:37.890545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.850 [2024-07-25 01:49:37.911947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.850 [2024-07-25 01:49:37.956567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.850 [2024-07-25 01:49:37.956711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.850 [2024-07-25 01:49:37.956816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.850 [2024-07-25 01:49:37.956819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:22.850 01:49:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.850 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.850 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.850 POWER: Cannot set governor of lcore 0 to performance 00:06:22.850 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.850 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.850 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.850 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.850 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:22.850 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:22.850 POWER: Unable to set Power Management Environment for lcore 0 00:06:22.850 [2024-07-25 01:49:38.032307] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:22.850 [2024-07-25 01:49:38.032430] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:22.850 [2024-07-25 01:49:38.032491] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:22.850 [2024-07-25 01:49:38.032645] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:22.850 [2024-07-25 01:49:38.032759] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:22.850 [2024-07-25 01:49:38.032817] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 01:49:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@10 
-- # set +x 00:06:22.850 [2024-07-25 01:49:38.071241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.850 [2024-07-25 01:49:38.088472] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 01:49:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 ************************************ 00:06:22.850 START TEST scheduler_create_thread 00:06:22.850 ************************************ 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 2 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 3 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 4 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 5 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 6 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.108 7 00:06:23.108 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.109 8 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.109 9 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.109 10 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # 
set +x 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.109 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.676 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.676 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:23.676 01:49:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:23.676 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.676 01:49:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.611 ************************************ 00:06:24.611 END TEST scheduler_create_thread 00:06:24.612 ************************************ 00:06:24.612 01:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.612 00:06:24.612 real 0m1.751s 00:06:24.612 user 0m0.013s 00:06:24.612 sys 0m0.006s 00:06:24.612 01:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.612 01:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.612 01:49:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.612 01:49:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 73526 00:06:24.612 01:49:39 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 73526 ']' 00:06:24.612 01:49:39 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 73526 00:06:24.612 01:49:39 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:24.612 01:49:39 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.612 01:49:39 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73526 00:06:24.870 killing process with pid 73526 00:06:24.870 01:49:39 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:24.870 01:49:39 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:24.870 01:49:39 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73526' 00:06:24.870 01:49:39 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 73526 00:06:24.870 01:49:39 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 73526 00:06:25.128 [2024-07-25 01:49:40.330732] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
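The scheduler_create_thread subtest above drives the app through rpc_cmd with a test-only RPC plugin (scheduler_plugin, shipped next to the test; rpc.py is assumed to find it via PYTHONPATH, which the harness arranges). Condensed, the sequence of calls recorded above is roughly:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
    for i in 0 1 2 3; do
        # one busy thread (-a 100) and one idle thread pinned to each core
        $rpc scheduler_thread_create -n active_pinned -m $(printf '0x%x' $((1 << i))) -a 100
        $rpc scheduler_thread_create -n idle_pinned   -m $(printf '0x%x' $((1 << i))) -a 0
    done
    $rpc scheduler_thread_create -n one_third_active -a 30   # unpinned
    tid=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$tid" 50               # raise its load to 50
    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"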
00:06:25.387 ************************************ 00:06:25.387 END TEST event_scheduler 00:06:25.387 ************************************ 00:06:25.387 00:06:25.387 real 0m2.825s 00:06:25.387 user 0m3.641s 00:06:25.387 sys 0m0.273s 00:06:25.387 01:49:40 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.387 01:49:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.387 01:49:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:25.387 01:49:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:25.387 01:49:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.387 01:49:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.387 01:49:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.387 ************************************ 00:06:25.387 START TEST app_repeat 00:06:25.387 ************************************ 00:06:25.387 01:49:40 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:25.387 Process app_repeat pid: 73601 00:06:25.387 spdk_app_start Round 0 00:06:25.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=73601 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73601' 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:25.387 01:49:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73601 /var/tmp/spdk-nbd.sock 00:06:25.387 01:49:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73601 ']' 00:06:25.387 01:49:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.387 01:49:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.387 01:49:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.387 01:49:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.387 01:49:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.387 [2024-07-25 01:49:40.553312] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
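Each app_repeat round that follows builds the same data path before exercising it: two 64 MiB malloc bdevs with 4096-byte blocks are created over the app's RPC socket and exported as kernel nbd devices. Condensed from the rpc.py calls recorded below:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096      # -> Malloc0
    $rpc bdev_malloc_create 64 4096      # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1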
00:06:25.387 [2024-07-25 01:49:40.553407] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73601 ] 00:06:25.387 [2024-07-25 01:49:40.676450] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.645 [2024-07-25 01:49:40.688412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.646 [2024-07-25 01:49:40.728898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.646 [2024-07-25 01:49:40.728929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.646 [2024-07-25 01:49:40.758273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.211 01:49:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.211 01:49:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:26.212 01:49:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.469 Malloc0 00:06:26.469 01:49:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.728 Malloc1 00:06:26.728 01:49:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.728 01:49:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.729 01:49:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.988 /dev/nbd0 00:06:26.988 01:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.988 01:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.988 
01:49:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.988 1+0 records in 00:06:26.988 1+0 records out 00:06:26.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209372 s, 19.6 MB/s 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.988 01:49:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.988 01:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.988 01:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.988 01:49:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.246 /dev/nbd1 00:06:27.246 01:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.246 01:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.247 1+0 records in 00:06:27.247 1+0 records out 00:06:27.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251756 s, 16.3 MB/s 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.247 01:49:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.247 01:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- 
# (( i++ )) 00:06:27.247 01:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.247 01:49:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.247 01:49:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.247 01:49:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.506 { 00:06:27.506 "nbd_device": "/dev/nbd0", 00:06:27.506 "bdev_name": "Malloc0" 00:06:27.506 }, 00:06:27.506 { 00:06:27.506 "nbd_device": "/dev/nbd1", 00:06:27.506 "bdev_name": "Malloc1" 00:06:27.506 } 00:06:27.506 ]' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.506 { 00:06:27.506 "nbd_device": "/dev/nbd0", 00:06:27.506 "bdev_name": "Malloc0" 00:06:27.506 }, 00:06:27.506 { 00:06:27.506 "nbd_device": "/dev/nbd1", 00:06:27.506 "bdev_name": "Malloc1" 00:06:27.506 } 00:06:27.506 ]' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.506 /dev/nbd1' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.506 /dev/nbd1' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.506 256+0 records in 00:06:27.506 256+0 records out 00:06:27.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494103 s, 212 MB/s 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.506 256+0 records in 00:06:27.506 256+0 records out 00:06:27.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021572 s, 48.6 MB/s 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.506 256+0 records in 00:06:27.506 256+0 records out 00:06:27.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232972 s, 45.0 MB/s 00:06:27.506 01:49:42 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.506 01:49:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.765 01:49:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.765 01:49:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.765 01:49:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.765 01:49:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.765 01:49:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.765 01:49:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.765 01:49:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.765 01:49:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.024 
01:49:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.024 01:49:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.282 01:49:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.282 01:49:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.541 01:49:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.799 [2024-07-25 01:49:43.910990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.799 [2024-07-25 01:49:43.941265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.799 [2024-07-25 01:49:43.941276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.799 [2024-07-25 01:49:43.969364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.799 [2024-07-25 01:49:43.969451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.799 [2024-07-25 01:49:43.969464] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.082 spdk_app_start Round 1 00:06:32.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.082 01:49:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.082 01:49:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:32.082 01:49:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73601 /var/tmp/spdk-nbd.sock 00:06:32.082 01:49:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73601 ']' 00:06:32.083 01:49:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.083 01:49:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.083 01:49:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
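Round 1 now repeats the data-path verification that Round 0 ran above. Stripped of the waitfornbd bookkeeping, the nbd_common.sh cycle is: write a random 1 MiB pattern file, copy it raw onto each nbd device with O_DIRECT, and byte-compare the device against the pattern (commands as recorded in this log):

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest $nbd
    done
    rm nbdrandtest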
00:06:32.083 01:49:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.083 01:49:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.083 01:49:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.083 01:49:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:32.083 01:49:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.083 Malloc0 00:06:32.083 01:49:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.341 Malloc1 00:06:32.341 01:49:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.341 01:49:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.599 /dev/nbd0 00:06:32.599 01:49:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.599 01:49:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.599 1+0 records in 00:06:32.599 1+0 records out 
00:06:32.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191889 s, 21.3 MB/s 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.599 01:49:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:32.599 01:49:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.599 01:49:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.599 01:49:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.857 /dev/nbd1 00:06:32.857 01:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.857 01:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.857 01:49:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.857 1+0 records in 00:06:32.857 1+0 records out 00:06:32.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331534 s, 12.4 MB/s 00:06:32.858 01:49:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.858 01:49:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:32.858 01:49:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.858 01:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.858 01:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:32.858 01:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.858 01:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.858 01:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.858 01:49:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.858 01:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.116 { 00:06:33.116 "nbd_device": "/dev/nbd0", 00:06:33.116 "bdev_name": "Malloc0" 00:06:33.116 }, 00:06:33.116 { 00:06:33.116 "nbd_device": "/dev/nbd1", 00:06:33.116 "bdev_name": "Malloc1" 00:06:33.116 } 
00:06:33.116 ]' 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.116 { 00:06:33.116 "nbd_device": "/dev/nbd0", 00:06:33.116 "bdev_name": "Malloc0" 00:06:33.116 }, 00:06:33.116 { 00:06:33.116 "nbd_device": "/dev/nbd1", 00:06:33.116 "bdev_name": "Malloc1" 00:06:33.116 } 00:06:33.116 ]' 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.116 /dev/nbd1' 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.116 /dev/nbd1' 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.116 01:49:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.375 256+0 records in 00:06:33.375 256+0 records out 00:06:33.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108454 s, 96.7 MB/s 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.375 256+0 records in 00:06:33.375 256+0 records out 00:06:33.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209072 s, 50.2 MB/s 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.375 256+0 records in 00:06:33.375 256+0 records out 00:06:33.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249458 s, 42.0 MB/s 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.375 01:49:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.376 01:49:48 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.376 01:49:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.634 01:49:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.893 01:49:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.893 01:49:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.893 01:49:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.893 01:49:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.893 01:49:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.893 01:49:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.893 01:49:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.893 01:49:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.893 01:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.893 01:49:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.893 01:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.151 01:49:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.151 01:49:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.410 01:49:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.410 [2024-07-25 01:49:49.643159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.410 [2024-07-25 01:49:49.673383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.410 [2024-07-25 01:49:49.673388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.410 [2024-07-25 01:49:49.700416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.410 [2024-07-25 01:49:49.700514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.410 [2024-07-25 01:49:49.700527] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.725 01:49:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.725 spdk_app_start Round 2 00:06:37.725 01:49:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.725 01:49:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73601 /var/tmp/spdk-nbd.sock 00:06:37.725 01:49:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73601 ']' 00:06:37.725 01:49:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.725 01:49:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.725 01:49:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
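[editor's note] Round 1 above gated each /dev/nbdX on the same readiness probe Round 2 repeats below: poll /proc/partitions, then prove the device readable with one direct-I/O block. A sketch of that waitfornbd helper as traced; the sleep between probes is an assumption, since the trace elides the waits:

    testdir=/home/vagrant/spdk_repo/spdk/test/event

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait up to 20 probes for nbd0/nbd1 to show up in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing; not visible in the trace
        done
        # Read one 4 KiB block with O_DIRECT to bypass the page cache, then
        # confirm the scratch file is non-empty before cleaning it up.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$testdir/nbdtest")
            rm -f "$testdir/nbdtest"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }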
00:06:37.725 01:49:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.725 01:49:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.725 01:49:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.725 01:49:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:37.725 01:49:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.983 Malloc0 00:06:37.983 01:49:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.983 Malloc1 00:06:37.983 01:49:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.983 01:49:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.244 /dev/nbd0 00:06:38.244 01:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.244 01:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.244 1+0 records in 00:06:38.244 1+0 records out 
00:06:38.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247099 s, 16.6 MB/s 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.244 01:49:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:38.244 01:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.245 01:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.245 01:49:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.507 /dev/nbd1 00:06:38.507 01:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.507 01:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.507 1+0 records in 00:06:38.507 1+0 records out 00:06:38.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395039 s, 10.4 MB/s 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.507 01:49:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:38.507 01:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.507 01:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.507 01:49:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.507 01:49:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.507 01:49:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.765 01:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.765 { 00:06:38.765 "nbd_device": "/dev/nbd0", 00:06:38.765 "bdev_name": "Malloc0" 00:06:38.765 }, 00:06:38.765 { 00:06:38.765 "nbd_device": "/dev/nbd1", 00:06:38.765 "bdev_name": "Malloc1" 00:06:38.765 } 
00:06:38.765 ]' 00:06:38.765 01:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.765 { 00:06:38.765 "nbd_device": "/dev/nbd0", 00:06:38.765 "bdev_name": "Malloc0" 00:06:38.765 }, 00:06:38.765 { 00:06:38.765 "nbd_device": "/dev/nbd1", 00:06:38.765 "bdev_name": "Malloc1" 00:06:38.765 } 00:06:38.765 ]' 00:06:38.765 01:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.023 /dev/nbd1' 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.023 /dev/nbd1' 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.023 01:49:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.024 256+0 records in 00:06:39.024 256+0 records out 00:06:39.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107357 s, 97.7 MB/s 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.024 256+0 records in 00:06:39.024 256+0 records out 00:06:39.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226846 s, 46.2 MB/s 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.024 256+0 records in 00:06:39.024 256+0 records out 00:06:39.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257786 s, 40.7 MB/s 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.024 01:49:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.024 01:49:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.282 01:49:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.540 01:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.797 01:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.798 01:49:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.798 01:49:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.056 01:49:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.056 [2024-07-25 01:49:55.337690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.313 [2024-07-25 01:49:55.368601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.313 [2024-07-25 01:49:55.368610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.313 [2024-07-25 01:49:55.395313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.313 [2024-07-25 01:49:55.395399] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.313 [2024-07-25 01:49:55.395411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.595 01:49:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 73601 /var/tmp/spdk-nbd.sock 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 73601 ']' 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
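[editor's note] Each round above runs the same data pass: seed 1 MiB of random data, stream it to every nbd device with O_DIRECT, then byte-compare each device against the seed. A condensed sketch of the traced nbd_dd_data_verify flow (paths and flags are taken verbatim from the log; the real helper takes the operation as an argument):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: 256 x 4 KiB = 1 MiB of random data, copied onto every device.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: compare the first 1M of each device against the seed file;
    # cmp exits non-zero on the first differing byte, failing the test.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"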
00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:43.595 01:49:58 event.app_repeat -- event/event.sh@39 -- # killprocess 73601 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 73601 ']' 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 73601 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73601 00:06:43.595 killing process with pid 73601 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73601' 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@969 -- # kill 73601 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@974 -- # wait 73601 00:06:43.595 spdk_app_start is called in Round 0. 00:06:43.595 Shutdown signal received, stop current app iteration 00:06:43.595 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 reinitialization... 00:06:43.595 spdk_app_start is called in Round 1. 00:06:43.595 Shutdown signal received, stop current app iteration 00:06:43.595 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 reinitialization... 00:06:43.595 spdk_app_start is called in Round 2. 00:06:43.595 Shutdown signal received, stop current app iteration 00:06:43.595 Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 reinitialization... 00:06:43.595 spdk_app_start is called in Round 3. 00:06:43.595 Shutdown signal received, stop current app iteration 00:06:43.595 01:49:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.595 01:49:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.595 00:06:43.595 real 0m18.129s 00:06:43.595 user 0m41.083s 00:06:43.595 sys 0m2.390s 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.595 01:49:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.595 ************************************ 00:06:43.595 END TEST app_repeat 00:06:43.595 ************************************ 00:06:43.595 01:49:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.595 01:49:58 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.595 01:49:58 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.595 01:49:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.595 01:49:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.595 ************************************ 00:06:43.595 START TEST cpu_locks 00:06:43.595 ************************************ 00:06:43.596 01:49:58 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.596 * Looking for test storage... 
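[editor's note] The killprocess sequence traced above (kill -0, uname, ps, the reactor_0-vs-sudo check, kill, wait) is the stock autotest teardown. A sketch of the traced branch only; the sudo path of the shipped helper is elided here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # traced guard against an empty pid
        kill -0 "$pid"                     # errors out if the process is already gone
        local process_name
        if [ "$(uname)" = Linux ]; then
            # Resolve the command name; SPDK reactors show up as reactor_0.
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then   # never SIGTERM a bare sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                    # reap it so sockets and locks are released
        fi
    }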
00:06:43.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:43.596 01:49:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:43.596 01:49:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:43.596 01:49:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:43.596 01:49:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:43.596 01:49:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.596 01:49:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.596 01:49:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.596 ************************************ 00:06:43.596 START TEST default_locks 00:06:43.596 ************************************ 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=74028 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 74028 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 74028 ']' 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.596 01:49:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.596 [2024-07-25 01:49:58.836488] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:43.596 [2024-07-25 01:49:58.836591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74028 ] 00:06:43.854 [2024-07-25 01:49:58.951741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:43.854 [2024-07-25 01:49:58.966361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.854 [2024-07-25 01:49:58.999488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.854 [2024-07-25 01:49:59.026205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.854 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.854 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:43.854 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 74028 00:06:43.854 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 74028 00:06:43.854 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 74028 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 74028 ']' 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 74028 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74028 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.421 killing process with pid 74028 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74028' 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 74028 00:06:44.421 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 74028 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 74028 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74028 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 74028 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 74028 ']' 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
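[editor's note] default_locks hinges on the locks_exist check traced above: a target started with -m 0x1 holds an advisory file lock whose name starts with spdk_cpu_lock (the exact path, presumably one lock file per claimed core under /var/tmp, is an assumption; only the grep pattern is in the trace), and lslocks can see it:

    locks_exist() {
        # True while the pid holds a CPU-core lock file.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist 74028   # succeeds while spdk_tgt 74028 is alive and locked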
00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.680 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (74028) - No such process 00:06:44.680 ERROR: process (pid: 74028) is no longer running 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.680 00:06:44.680 real 0m1.005s 00:06:44.680 user 0m1.077s 00:06:44.680 sys 0m0.394s 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.680 01:49:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.680 ************************************ 00:06:44.680 END TEST default_locks 00:06:44.680 ************************************ 00:06:44.680 01:49:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:44.680 01:49:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.680 01:49:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.680 01:49:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.680 ************************************ 00:06:44.680 START TEST default_locks_via_rpc 00:06:44.680 ************************************ 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=74067 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 74067 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74067 ']' 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
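[editor's note] The "No such process" and ERROR lines above are expected: the test re-runs waitforlisten against the killed pid under the NOT wrapper, which inverts the exit status. A condensed sketch of the traced branches; the shipped helper also validates its argument (valid_exec_arg) and has extra status handling ([[ -n '' ]]) elided here, and the signal-death branch is an assumption:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # assumed: death-by-signal stays a failure
        (( !es == 0 ))                   # success exactly when the command failed
    }

    NOT waitforlisten 74028   # passes here, because pid 74028 is already gone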
00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.680 01:49:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.680 [2024-07-25 01:49:59.907453] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:44.680 [2024-07-25 01:49:59.907569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74067 ] 00:06:44.939 [2024-07-25 01:50:00.029435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:44.939 [2024-07-25 01:50:00.039818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.939 [2024-07-25 01:50:00.071916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.939 [2024-07-25 01:50:00.099088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 74067 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 74067 00:06:45.875 01:50:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.132 01:50:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 74067 00:06:46.132 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 74067 ']' 00:06:46.132 01:50:01 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@954 -- # kill -0 74067 00:06:46.132 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:46.132 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.132 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74067 00:06:46.132 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.132 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.132 killing process with pid 74067 00:06:46.133 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74067' 00:06:46.133 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 74067 00:06:46.133 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 74067 00:06:46.391 00:06:46.391 real 0m1.636s 00:06:46.391 user 0m1.861s 00:06:46.391 sys 0m0.414s 00:06:46.391 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.391 01:50:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.391 ************************************ 00:06:46.391 END TEST default_locks_via_rpc 00:06:46.391 ************************************ 00:06:46.391 01:50:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:46.391 01:50:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.391 01:50:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.391 01:50:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.391 ************************************ 00:06:46.391 START TEST non_locking_app_on_locked_coremask 00:06:46.391 ************************************ 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=74115 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 74115 /var/tmp/spdk.sock 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74115 ']' 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
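[editor's note] The via-rpc variant that just finished above toggles the same lock at runtime instead of via process flags. Condensed from the trace, assuming the autotest helpers (rpc_cmd, no_locks, locks_exist, killprocess) are sourced from autotest_common.sh and cpu_locks.sh:

    rpc_cmd framework_disable_cpumask_locks   # lock released while the app keeps running
    no_locks                                  # traced: the lock_files array stays empty
    rpc_cmd framework_enable_cpumask_locks    # lock re-acquired over the RPC socket
    locks_exist 74067                         # lslocks sees spdk_cpu_lock again
    killprocess 74067                         # SIGTERM + wait, as in the earlier tests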
00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.391 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.391 [2024-07-25 01:50:01.587598] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:46.391 [2024-07-25 01:50:01.587704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74115 ] 00:06:46.650 [2024-07-25 01:50:01.703193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.650 [2024-07-25 01:50:01.720276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.650 [2024-07-25 01:50:01.752118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.650 [2024-07-25 01:50:01.780393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=74123 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 74123 /var/tmp/spdk2.sock 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74123 ']' 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.650 01:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.650 [2024-07-25 01:50:01.936563] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:46.650 [2024-07-25 01:50:01.936664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74123 ] 00:06:46.909 [2024-07-25 01:50:02.053324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
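[editor's note] non_locking_app_on_locked_coremask needs two targets on the same core: the first holds the 0x1 lock, the second opts out of locking and listens on its own RPC socket so the two apps can coexist. The traced launch pair, in essence (backgrounding with '&' and '$!' is an assumption about how the harness runs them):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance claims core 0 and its lock (traced pid 74115).
    $spdk_tgt -m 0x1 &
    waitforlisten $! /var/tmp/spdk.sock

    # Second instance shares the mask but skips locking and gets its own
    # RPC socket, so both targets run on core 0 (traced pid 74123).
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    waitforlisten $! /var/tmp/spdk2.sock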
00:06:46.909 [2024-07-25 01:50:02.072292] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:46.909 [2024-07-25 01:50:02.072338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.909 [2024-07-25 01:50:02.142481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.909 [2024-07-25 01:50:02.200886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.168 01:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.168 01:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.168 01:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 74115 00:06:47.168 01:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74115 00:06:47.168 01:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 74115 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74115 ']' 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 74115 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74115 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.103 killing process with pid 74115 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74115' 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 74115 00:06:48.103 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 74115 00:06:48.362 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 74123 00:06:48.362 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74123 ']' 00:06:48.362 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 74123 00:06:48.362 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.362 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.362 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74123 00:06:48.621 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.621 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:06:48.621 killing process with pid 74123 00:06:48.621 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74123' 00:06:48.621 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 74123 00:06:48.621 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 74123 00:06:48.621 00:06:48.621 real 0m2.341s 00:06:48.621 user 0m2.612s 00:06:48.621 sys 0m0.804s 00:06:48.621 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.621 01:50:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.621 ************************************ 00:06:48.621 END TEST non_locking_app_on_locked_coremask 00:06:48.621 ************************************ 00:06:48.621 01:50:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:48.621 01:50:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.621 01:50:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.621 01:50:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.880 ************************************ 00:06:48.880 START TEST locking_app_on_unlocked_coremask 00:06:48.880 ************************************ 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=74177 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 74177 /var/tmp/spdk.sock 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74177 ']' 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.880 01:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.880 [2024-07-25 01:50:03.989160] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:06:48.880 [2024-07-25 01:50:03.989714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74177 ] 00:06:48.880 [2024-07-25 01:50:04.110991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.880 [2024-07-25 01:50:04.129174] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.880 [2024-07-25 01:50:04.129213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.880 [2024-07-25 01:50:04.162338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.139 [2024-07-25 01:50:04.193448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=74186 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 74186 /var/tmp/spdk2.sock 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74186 ']' 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.139 01:50:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.139 [2024-07-25 01:50:04.376278] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:49.139 [2024-07-25 01:50:04.376367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74186 ] 00:06:49.398 [2024-07-25 01:50:04.500279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
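Note on the check that follows: the first target (pid 74177) was launched with --disable-cpumask-locks, so core 0 is left unclaimed and the second target can bind the same mask and take the lock itself. The locks_exist steps in this trace pair lslocks with a grep for the lock-file name. A minimal sketch of that idea, reconstructed from the traced commands (the real helper lives in event/cpu_locks.sh and may differ in detail):

    # Does <pid> hold an SPDK CPU-core lock? (approximation of locks_exist)
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 74186 && echo "pid 74186 holds its core lock"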
00:06:49.398 [2024-07-25 01:50:04.514520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.398 [2024-07-25 01:50:04.580487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.398 [2024-07-25 01:50:04.634404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.334 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.334 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.334 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 74186 00:06:50.334 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74186 00:06:50.334 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 74177 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74177 ']' 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 74177 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74177 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.593 killing process with pid 74177 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74177' 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 74177 00:06:50.593 01:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 74177 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 74186 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74186 ']' 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 74186 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74186 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.161 killing process with pid 74186 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74186' 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 74186 00:06:51.161 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 74186 00:06:51.420 00:06:51.420 real 0m2.603s 00:06:51.420 user 0m3.034s 00:06:51.420 sys 0m0.727s 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.420 ************************************ 00:06:51.420 END TEST locking_app_on_unlocked_coremask 00:06:51.420 ************************************ 00:06:51.420 01:50:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.420 01:50:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.420 01:50:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.420 01:50:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.420 ************************************ 00:06:51.420 START TEST locking_app_on_locked_coremask 00:06:51.420 ************************************ 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=74242 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 74242 /var/tmp/spdk.sock 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74242 ']' 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.420 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.420 [2024-07-25 01:50:06.645129] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:51.420 [2024-07-25 01:50:06.645216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74242 ] 00:06:51.679 [2024-07-25 01:50:06.765912] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:51.679 [2024-07-25 01:50:06.786880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.679 [2024-07-25 01:50:06.827292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.679 [2024-07-25 01:50:06.859378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=74246 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 74246 /var/tmp/spdk2.sock 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74246 /var/tmp/spdk2.sock 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 74246 /var/tmp/spdk2.sock 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 74246 ']' 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.938 01:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.938 [2024-07-25 01:50:07.035939] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:51.938 [2024-07-25 01:50:07.036049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74246 ] 00:06:51.938 [2024-07-25 01:50:07.157238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
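This is the failure case: pid 74242 was started without --disable-cpumask-locks and holds the lock on core 0, so the second target launched above is expected to die during startup. The NOT wrapper around waitforlisten inverts the exit status, making the step pass only when the wrapped command fails. A simplified sketch of the idiom (autotest_common.sh's real helper also does the es bookkeeping visible in the trace):

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT waitforlisten 74246 /var/tmp/spdk2.sock   # passes: 74246 cannot claim core 0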
00:06:51.938 [2024-07-25 01:50:07.175212] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 74242 has claimed it. 00:06:51.938 [2024-07-25 01:50:07.175270] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:52.504 ERROR: process (pid: 74246) is no longer running 00:06:52.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (74246) - No such process 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 74242 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74242 00:06:52.504 01:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 74242 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 74242 ']' 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 74242 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74242 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.070 killing process with pid 74242 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74242' 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 74242 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 74242 00:06:53.070 00:06:53.070 real 0m1.771s 00:06:53.070 user 0m2.072s 00:06:53.070 sys 0m0.474s 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.070 01:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.070 ************************************ 00:06:53.070 END TEST locking_app_on_locked_coremask 00:06:53.070 ************************************ 00:06:53.329 01:50:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask 
locking_overlapped_coremask 00:06:53.329 01:50:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.329 01:50:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.329 01:50:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.329 ************************************ 00:06:53.329 START TEST locking_overlapped_coremask 00:06:53.329 ************************************ 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74297 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 74297 /var/tmp/spdk.sock 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 74297 ']' 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.329 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.329 [2024-07-25 01:50:08.464360] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:53.330 [2024-07-25 01:50:08.464452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74297 ] 00:06:53.330 [2024-07-25 01:50:08.586176] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
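The overlap being tested here: the first target takes -m 0x7 (binary 111, cores 0-2) and the second will take -m 0x1c (binary 11100, cores 2-4), so the two masks collide on exactly one core, core 2, which is the core named in the claim error further down. The intersection is easy to verify with shell arithmetic:

    # Core masks as bit sets: 0x7 and 0x1c overlap only on core 2.
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 set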
00:06:53.330 [2024-07-25 01:50:08.601734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.589 [2024-07-25 01:50:08.637259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.589 [2024-07-25 01:50:08.637395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.589 [2024-07-25 01:50:08.637400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.589 [2024-07-25 01:50:08.665102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74302 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74302 /var/tmp/spdk2.sock 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 74302 /var/tmp/spdk2.sock 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 74302 /var/tmp/spdk2.sock 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 74302 ']' 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.589 01:50:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.589 [2024-07-25 01:50:08.844665] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:06:53.589 [2024-07-25 01:50:08.844758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:06:53.859 [2024-07-25 01:50:08.969652] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:53.860 [2024-07-25 01:50:08.987139] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74297 has claimed it. 00:06:53.860 [2024-07-25 01:50:08.987219] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.440 ERROR: process (pid: 74302) is no longer running 00:06:54.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (74302) - No such process 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 74297 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 74297 ']' 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 74297 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74297 00:06:54.440 killing process with pid 74297 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 74297' 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 74297 00:06:54.440 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 74297 00:06:54.699 ************************************ 00:06:54.699 END TEST locking_overlapped_coremask 00:06:54.699 ************************************ 00:06:54.699 00:06:54.699 real 0m1.394s 00:06:54.699 user 0m3.802s 00:06:54.699 sys 0m0.290s 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.699 01:50:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:54.699 01:50:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.699 01:50:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.699 01:50:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.699 ************************************ 00:06:54.699 START TEST locking_overlapped_coremask_via_rpc 00:06:54.699 ************************************ 00:06:54.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74342 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 74342 /var/tmp/spdk.sock 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74342 ']' 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.699 01:50:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.699 [2024-07-25 01:50:09.897878] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:54.699 [2024-07-25 01:50:09.897981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74342 ] 00:06:54.958 [2024-07-25 01:50:10.017595] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
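The via_rpc variant inverts the setup: both targets start on the same overlapping masks but with --disable-cpumask-locks, so startup succeeds for both, and the locks are only claimed afterwards through an RPC. Paraphrasing the two launch lines from this trace (backgrounding added for illustration, paths shortened):

    # Both coexist at startup because neither claims core locks yet:
    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &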
00:06:54.958 [2024-07-25 01:50:10.032899] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:54.958 [2024-07-25 01:50:10.033095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.958 [2024-07-25 01:50:10.067740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.958 [2024-07-25 01:50:10.067848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.958 [2024-07-25 01:50:10.067855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.958 [2024-07-25 01:50:10.096120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.523 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.523 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74360 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 74360 /var/tmp/spdk2.sock 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74360 ']' 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.782 01:50:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.782 [2024-07-25 01:50:10.885072] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:55.782 [2024-07-25 01:50:10.885383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74360 ] 00:06:55.782 [2024-07-25 01:50:11.009590] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.782 [2024-07-25 01:50:11.030029] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
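Each waitforlisten call above blocks until the newly launched target answers on its RPC socket. A hedged sketch of what that helper does (the real one in autotest_common.sh adds longer retry limits and error reporting):

    # Poll the RPC socket until the target answers or the process dies.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            sleep 0.1
        done
        return 1
    }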
00:06:55.782 [2024-07-25 01:50:11.030076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.041 [2024-07-25 01:50:11.098343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.041 [2024-07-25 01:50:11.101951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.041 [2024-07-25 01:50:11.101953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.041 [2024-07-25 01:50:11.151905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.608 [2024-07-25 01:50:11.821036] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74342 has claimed it. 00:06:56.608 request: 00:06:56.608 { 00:06:56.608 "method": "framework_enable_cpumask_locks", 00:06:56.608 "req_id": 1 00:06:56.608 } 00:06:56.608 Got JSON-RPC error response 00:06:56.608 response: 00:06:56.608 { 00:06:56.608 "code": -32603, 00:06:56.608 "message": "Failed to claim CPU core: 2" 00:06:56.608 } 00:06:56.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
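The JSON-RPC exchange above is the heart of the test: the first target (pid 74342) has already claimed cores 0-2 via framework_enable_cpumask_locks, so the same call against the second target's socket fails on the shared core 2 and surfaces as JSON-RPC internal error -32603. Reproduced as plain rpc.py invocations (paths shortened):

    # First target claims its cores; the second must then fail on core 2.
    scripts/rpc.py framework_enable_cpumask_locks
    if ! scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "core 2 already claimed, as expected"
    fi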
00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 74342 /var/tmp/spdk.sock 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74342 ']' 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.608 01:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 74360 /var/tmp/spdk2.sock 00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 74360 ']' 00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.867 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.126 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.126 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.126 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:57.126 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.126 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.127 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.127 00:06:57.127 real 0m2.477s 00:06:57.127 user 0m1.215s 00:06:57.127 sys 0m0.187s 00:06:57.127 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.127 ************************************ 00:06:57.127 END TEST locking_overlapped_coremask_via_rpc 00:06:57.127 ************************************ 00:06:57.127 01:50:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.127 01:50:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:57.127 01:50:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74342 ]] 00:06:57.127 01:50:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74342 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74342 ']' 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74342 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74342 00:06:57.127 killing process with pid 74342 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74342' 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 74342 00:06:57.127 01:50:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 74342 00:06:57.386 01:50:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74360 ]] 00:06:57.386 01:50:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74360 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74360 ']' 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74360 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.386 
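check_remaining_locks, traced just above, verifies that exactly the expected lock files exist once the RPC has succeeded: it globs /var/tmp/spdk_cpu_lock_* and compares the result against the brace expansion for cores 000-002. The comparison as it appears in the trace, reassembled:

    # Exactly the locks for cores 0-2 must remain, nothing else.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]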
01:50:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74360 00:06:57.386 killing process with pid 74360 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74360' 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 74360 00:06:57.386 01:50:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 74360 00:06:57.645 01:50:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.645 Process with pid 74342 is not found 00:06:57.645 Process with pid 74360 is not found 00:06:57.645 01:50:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.645 01:50:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74342 ]] 00:06:57.645 01:50:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74342 00:06:57.645 01:50:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74342 ']' 00:06:57.645 01:50:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74342 00:06:57.645 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74342) - No such process 00:06:57.645 01:50:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 74342 is not found' 00:06:57.645 01:50:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74360 ]] 00:06:57.645 01:50:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74360 00:06:57.645 01:50:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 74360 ']' 00:06:57.645 01:50:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 74360 00:06:57.645 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74360) - No such process 00:06:57.645 01:50:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 74360 is not found' 00:06:57.645 01:50:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.645 ************************************ 00:06:57.645 END TEST cpu_locks 00:06:57.645 ************************************ 00:06:57.645 00:06:57.645 real 0m14.174s 00:06:57.645 user 0m26.850s 00:06:57.645 sys 0m3.893s 00:06:57.645 01:50:12 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.645 01:50:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.645 00:06:57.645 real 0m39.220s 00:06:57.645 user 1m17.945s 00:06:57.645 sys 0m6.909s 00:06:57.645 01:50:12 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.645 01:50:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.645 ************************************ 00:06:57.645 END TEST event 00:06:57.645 ************************************ 00:06:57.904 01:50:12 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.904 01:50:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.904 01:50:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.904 01:50:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.904 ************************************ 00:06:57.904 START TEST thread 00:06:57.904 ************************************ 00:06:57.904 01:50:12 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.904 * Looking for test storage... 
00:06:57.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:57.904 01:50:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.904 01:50:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:57.904 01:50:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.904 01:50:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.904 ************************************ 00:06:57.904 START TEST thread_poller_perf 00:06:57.904 ************************************ 00:06:57.904 01:50:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.904 [2024-07-25 01:50:13.062918] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:06:57.904 [2024-07-25 01:50:13.062999] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74478 ] 00:06:57.904 [2024-07-25 01:50:13.177348] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:57.904 [2024-07-25 01:50:13.195777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.172 [2024-07-25 01:50:13.228302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.172 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:59.108 ====================================== 00:06:59.108 busy:2209873292 (cyc) 00:06:59.108 total_run_count: 374000 00:06:59.108 tsc_hz: 2200000000 (cyc) 00:06:59.108 ====================================== 00:06:59.108 poller_cost: 5908 (cyc), 2685 (nsec) 00:06:59.108 00:06:59.108 real 0m1.242s 00:06:59.108 user 0m1.097s 00:06:59.108 sys 0m0.038s 00:06:59.108 01:50:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.108 01:50:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.108 ************************************ 00:06:59.108 END TEST thread_poller_perf 00:06:59.108 ************************************ 00:06:59.108 01:50:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.108 01:50:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:59.108 01:50:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.108 01:50:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.108 ************************************ 00:06:59.108 START TEST thread_poller_perf 00:06:59.108 ************************************ 00:06:59.108 01:50:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.108 [2024-07-25 01:50:14.353241] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:06:59.109 [2024-07-25 01:50:14.353328] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74513 ] 00:06:59.367 [2024-07-25 01:50:14.472977] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.367 [2024-07-25 01:50:14.490049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.367 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:59.367 [2024-07-25 01:50:14.529397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.300 ====================================== 00:07:00.300 busy:2201966392 (cyc) 00:07:00.300 total_run_count: 4853000 00:07:00.300 tsc_hz: 2200000000 (cyc) 00:07:00.300 ====================================== 00:07:00.300 poller_cost: 453 (cyc), 205 (nsec) 00:07:00.300 ************************************ 00:07:00.300 END TEST thread_poller_perf 00:07:00.300 ************************************ 00:07:00.300 00:07:00.300 real 0m1.245s 00:07:00.300 user 0m1.099s 00:07:00.300 sys 0m0.039s 00:07:00.300 01:50:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.300 01:50:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.559 01:50:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:00.559 ************************************ 00:07:00.559 END TEST thread 00:07:00.559 ************************************ 00:07:00.559 00:07:00.559 real 0m2.666s 00:07:00.559 user 0m2.266s 00:07:00.559 sys 0m0.177s 00:07:00.559 01:50:15 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.559 01:50:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.559 01:50:15 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:00.559 01:50:15 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:00.559 01:50:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.559 01:50:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.559 01:50:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.559 ************************************ 00:07:00.559 START TEST app_cmdline 00:07:00.559 ************************************ 00:07:00.559 01:50:15 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:00.559 * Looking for test storage... 00:07:00.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.559 01:50:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:00.559 01:50:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74590 00:07:00.559 01:50:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74590 00:07:00.559 01:50:15 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 74590 ']' 00:07:00.559 01:50:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:00.559 01:50:15 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.559 01:50:15 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
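Before the cmdline tests: the two poller_perf runs above make the cost accounting easy to check by hand. poller_cost is busy cycles divided by run count, and the nanosecond figure follows from the reported 2.2 GHz TSC:

    # Reproducing the poller_cost numbers printed above:
    echo $(( 2209873292 / 374000 ))    # -> 5908 cycles per 1 us-period poller run
    echo $(( 2201966392 / 4853000 ))   # -> 453 cycles per zero-period poller run
    # ns per run = cycles / (tsc_hz / 1e9): 5908 / 2.2 ~ 2685 ns, 453 / 2.2 ~ 205 ns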
00:07:00.559 01:50:15 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.559 01:50:15 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.559 01:50:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.559 [2024-07-25 01:50:15.813628] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:00.559 [2024-07-25 01:50:15.813720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74590 ] 00:07:00.818 [2024-07-25 01:50:15.935898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:00.818 [2024-07-25 01:50:15.953161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.818 [2024-07-25 01:50:15.985790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.818 [2024-07-25 01:50:16.012169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.818 01:50:16 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.818 01:50:16 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:00.818 01:50:16 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:01.383 { 00:07:01.383 "version": "SPDK v24.09-pre git sha1 d005e023b", 00:07:01.383 "fields": { 00:07:01.383 "major": 24, 00:07:01.383 "minor": 9, 00:07:01.383 "patch": 0, 00:07:01.383 "suffix": "-pre", 00:07:01.383 "commit": "d005e023b" 00:07:01.383 } 00:07:01.383 } 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:01.383 01:50:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
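Context for the failure that follows: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods work and anything else is rejected before dispatch. The behavior in two rpc.py calls:

    # Only the whitelisted methods succeed against this target:
    scripts/rpc.py spdk_get_version          # allowed, returns the version object above
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 'Method not found'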
00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:01.383 01:50:16 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.383 request: 00:07:01.383 { 00:07:01.383 "method": "env_dpdk_get_mem_stats", 00:07:01.383 "req_id": 1 00:07:01.383 } 00:07:01.383 Got JSON-RPC error response 00:07:01.383 response: 00:07:01.383 { 00:07:01.383 "code": -32601, 00:07:01.383 "message": "Method not found" 00:07:01.383 } 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.642 01:50:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74590 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 74590 ']' 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 74590 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74590 00:07:01.642 killing process with pid 74590 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74590' 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@969 -- # kill 74590 00:07:01.642 01:50:16 app_cmdline -- common/autotest_common.sh@974 -- # wait 74590 00:07:01.901 00:07:01.901 real 0m1.285s 00:07:01.901 user 0m1.716s 00:07:01.901 sys 0m0.322s 00:07:01.901 01:50:16 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.901 ************************************ 00:07:01.901 END TEST app_cmdline 00:07:01.901 ************************************ 00:07:01.901 01:50:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.901 01:50:16 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:01.901 01:50:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.901 01:50:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.901 01:50:16 -- common/autotest_common.sh@10 -- # set +x 00:07:01.901 ************************************ 00:07:01.901 START TEST version 00:07:01.901 ************************************ 00:07:01.901 01:50:17 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:01.901 * Looking for test storage... 
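The JSON-RPC exchange above is the negative half of the test: env_dpdk_get_mem_stats is not registered on this target, so the server replies with code -32601 ("Method not found"), rpc.py exits non-zero, and the NOT wrapper turns that expected failure into a pass (es=1, which is not greater than 128, so NOT returns 0). Teardown then goes through killprocess; the steps traced above reduce to roughly this sketch (simplified from the autotest_common.sh helper, which also checks the OS with uname and refuses to signal a sudo-wrapped process):

  killprocess() {
    local pid=$1
    kill -0 "$pid"                        # assert the process is still alive
    ps --no-headers -o comm= "$pid"       # here: reactor_0, i.e. an SPDK app
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                           # reap it so the pid and socket are released
  }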
00:07:01.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:01.901 01:50:17 version -- app/version.sh@17 -- # get_header_version major 00:07:01.901 01:50:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.901 01:50:17 version -- app/version.sh@14 -- # cut -f2 00:07:01.901 01:50:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.901 01:50:17 version -- app/version.sh@17 -- # major=24 00:07:01.901 01:50:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:01.901 01:50:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.901 01:50:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.901 01:50:17 version -- app/version.sh@14 -- # cut -f2 00:07:01.901 01:50:17 version -- app/version.sh@18 -- # minor=9 00:07:01.901 01:50:17 version -- app/version.sh@19 -- # get_header_version patch 00:07:01.901 01:50:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.901 01:50:17 version -- app/version.sh@14 -- # cut -f2 00:07:01.901 01:50:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.901 01:50:17 version -- app/version.sh@19 -- # patch=0 00:07:01.901 01:50:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:01.901 01:50:17 version -- app/version.sh@14 -- # cut -f2 00:07:01.901 01:50:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:01.901 01:50:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.901 01:50:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:01.901 01:50:17 version -- app/version.sh@22 -- # version=24.9 00:07:01.901 01:50:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:01.901 01:50:17 version -- app/version.sh@28 -- # version=24.9rc0 00:07:01.901 01:50:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:01.901 01:50:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:01.901 01:50:17 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:01.901 01:50:17 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:01.901 00:07:01.901 real 0m0.151s 00:07:01.901 user 0m0.088s 00:07:01.901 sys 0m0.094s 00:07:01.901 ************************************ 00:07:01.901 END TEST version 00:07:01.901 ************************************ 00:07:01.901 01:50:17 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.901 01:50:17 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.901 01:50:17 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:01.901 01:50:17 -- spdk/autotest.sh@202 -- # uname -s 00:07:02.159 01:50:17 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:02.159 01:50:17 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:02.159 01:50:17 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:07:02.159 01:50:17 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:07:02.159 01:50:17 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:02.159 01:50:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.159 01:50:17 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.159 01:50:17 -- common/autotest_common.sh@10 -- # set +x 00:07:02.159 ************************************ 00:07:02.159 START TEST spdk_dd 00:07:02.159 ************************************ 00:07:02.159 01:50:17 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:02.159 * Looking for test storage... 00:07:02.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:02.159 01:50:17 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.159 01:50:17 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.159 01:50:17 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.159 01:50:17 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.159 01:50:17 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.159 01:50:17 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.159 01:50:17 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.159 01:50:17 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:02.159 01:50:17 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.159 01:50:17 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:02.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.416 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:02.416 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:02.416 01:50:17 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:02.416 01:50:17 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:07:02.416 01:50:17 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@230 -- # local class 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@232 -- # local progif 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@233 -- # class=01 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:07:02.416 01:50:17 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:07:02.417 01:50:17 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:07:02.676 01:50:17 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* 
]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.676 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 
== liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == 
liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:02.677 * spdk_dd linked to liburing 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:02.677 01:50:17 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:02.677 01:50:17 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:02.678 01:50:17 spdk_dd -- 
common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 
00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:02.678 01:50:17 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:07:02.678 01:50:17 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:02.678 01:50:17 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:02.678 01:50:17 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:02.678 01:50:17 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:02.678 01:50:17 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:02.678 01:50:17 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:02.678 01:50:17 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:02.678 01:50:17 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.678 01:50:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:02.678 ************************************ 00:07:02.678 START TEST spdk_dd_basic_rw 00:07:02.678 ************************************ 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:02.678 * Looking for test storage... 
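check_liburing above decides whether this spdk_dd build can exercise the io_uring paths: objdump -p dumps the ELF dynamic section, grep keeps the NEEDED lines (one per shared-library dependency), and the read loop matches each soname against liburing.so.*. The last entry, liburing.so.2, matches, so the script prints "spdk_dd linked to liburing", sources test/common/build_config.sh (CONFIG_URING=y on this build), and exports liburing_in_use=1, which keeps the (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) guard in dd.sh from tripping. The dependency check can be reproduced on its own; a sketch, assuming the binary path from this build:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  while read -r _ lib _; do                           # fields: NEEDED <soname>
    [[ $lib == liburing.so.* ]] && { echo "spdk_dd linked to liburing"; break; }
  done < <(objdump -p "$bin" | grep NEEDED)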
00:07:02.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:02.678 01:50:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.940 ************************************ 00:07:02.940 START TEST dd_bs_lt_native_bs 00:07:02.940 ************************************ 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.940 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.941 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.941 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.941 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.941 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.941 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.941 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.941 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:02.941 { 00:07:02.941 "subsystems": [ 00:07:02.941 { 00:07:02.941 "subsystem": "bdev", 00:07:02.941 "config": [ 00:07:02.941 { 00:07:02.941 "params": { 00:07:02.941 "trtype": "pcie", 00:07:02.941 "traddr": "0000:00:10.0", 00:07:02.941 "name": "Nvme0" 00:07:02.941 }, 00:07:02.941 "method": "bdev_nvme_attach_controller" 00:07:02.941 }, 00:07:02.941 { 00:07:02.941 "method": "bdev_wait_for_examine" 00:07:02.941 } 00:07:02.941 ] 00:07:02.941 } 00:07:02.941 ] 00:07:02.941 } 00:07:02.941 [2024-07-25 01:50:18.130799] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:02.941 [2024-07-25 01:50:18.131144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74899 ] 00:07:03.199 [2024-07-25 01:50:18.252540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
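For orientation, this stretch of the trace reduces to a short probe-and-assert: the harness captures the controller dump shown above, scrapes out the native LBA data size with the regex visible in the trace (4096 on this QEMU controller), then runs spdk_dd with a smaller --bs under the harness's NOT wrapper so the step passes only if the command fails. A minimal sketch, assuming the dump is already in $identify_dump; gen_conf emits the bdev JSON shown above, and the NOT stand-in and fd plumbing are illustrative rather than the literal dd/common.sh code:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
NOT() { ! "$@"; }                     # simplified stand-in for the autotest wrapper

gen_conf() {                          # stand-in: the bdev config streamed over /dev/fd/61
  cat <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0"},
   "method": "bdev_nvme_attach_controller"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
}

re='LBA Format #04: Data Size: *([0-9]+)'                     # regex taken verbatim from the trace
[[ $identify_dump =~ $re ]] && native_bs=${BASH_REMATCH[1]}   # -> 4096

# 2048 < native_bs, so spdk_dd must refuse the copy and exit non-zero
NOT "$DD" --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 \
    61< <(gen_conf) 62< <(printf '%s' 'input data')

The ERROR line just below ("--bs value cannot be less than input (1) neither output (4096) native block size") is that refusal; the es=234 / es=1 bookkeeping that follows maps the expected failure back to a passing test.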
00:07:03.199 [2024-07-25 01:50:18.274113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.199 [2024-07-25 01:50:18.314365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.199 [2024-07-25 01:50:18.346854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.199 [2024-07-25 01:50:18.435975] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:03.199 [2024-07-25 01:50:18.436058] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.457 [2024-07-25 01:50:18.507352] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:03.457 ************************************ 00:07:03.457 END TEST dd_bs_lt_native_bs 00:07:03.457 ************************************ 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.457 00:07:03.457 real 0m0.523s 00:07:03.457 user 0m0.364s 00:07:03.457 sys 0m0.114s 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.457 ************************************ 00:07:03.457 START TEST dd_rw 00:07:03.457 ************************************ 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:03.457 01:50:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 01:50:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:04.023 01:50:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:04.023 01:50:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.023 01:50:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.023 [2024-07-25 01:50:19.304739] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:04.023 [2024-07-25 01:50:19.305026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74930 ] 00:07:04.023 { 00:07:04.023 "subsystems": [ 00:07:04.023 { 00:07:04.023 "subsystem": "bdev", 00:07:04.023 "config": [ 00:07:04.023 { 00:07:04.023 "params": { 00:07:04.023 "trtype": "pcie", 00:07:04.023 "traddr": "0000:00:10.0", 00:07:04.023 "name": "Nvme0" 00:07:04.023 }, 00:07:04.023 "method": "bdev_nvme_attach_controller" 00:07:04.023 }, 00:07:04.023 { 00:07:04.023 "method": "bdev_wait_for_examine" 00:07:04.023 } 00:07:04.023 ] 00:07:04.023 } 00:07:04.023 ] 00:07:04.023 } 00:07:04.281 [2024-07-25 01:50:19.427039] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
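The block-size arithmetic traced above is worth restating compactly: dd_rw derives its block sizes by left-shifting the native 4096 twice, pairs each with queue depths 1 and 64, and shrinks the block count so every pass stays near 60 KiB. A condensed restatement, with the values exactly as traced:

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
  bss+=($((native_bs << bs)))   # 4096, 8192, 16384
done
# counts seen in the trace keep each pass a comparable size:
#   bs=4096  -> count=15 -> size=61440
#   bs=8192  -> count=7  -> size=57344
#   bs=16384 -> count=3  -> size=49152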
00:07:04.281 [2024-07-25 01:50:19.443957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.281 [2024-07-25 01:50:19.484534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.281 [2024-07-25 01:50:19.518221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.539  Copying: 60/60 [kB] (average 19 MBps) 00:07:04.539 00:07:04.539 01:50:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:04.539 01:50:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:04.539 01:50:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.539 01:50:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.539 { 00:07:04.539 "subsystems": [ 00:07:04.539 { 00:07:04.539 "subsystem": "bdev", 00:07:04.539 "config": [ 00:07:04.539 { 00:07:04.539 "params": { 00:07:04.539 "trtype": "pcie", 00:07:04.539 "traddr": "0000:00:10.0", 00:07:04.539 "name": "Nvme0" 00:07:04.539 }, 00:07:04.539 "method": "bdev_nvme_attach_controller" 00:07:04.539 }, 00:07:04.539 { 00:07:04.539 "method": "bdev_wait_for_examine" 00:07:04.539 } 00:07:04.539 ] 00:07:04.539 } 00:07:04.539 ] 00:07:04.539 } 00:07:04.539 [2024-07-25 01:50:19.783141] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:04.539 [2024-07-25 01:50:19.783237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74949 ] 00:07:04.797 [2024-07-25 01:50:19.903399] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:04.797 [2024-07-25 01:50:19.919140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.797 [2024-07-25 01:50:19.952710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.797 [2024-07-25 01:50:19.979581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.056  Copying: 60/60 [kB] (average 19 MBps) 00:07:05.056 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:05.056 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.056 [2024-07-25 01:50:20.247215] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:05.056 [2024-07-25 01:50:20.247294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74959 ] 00:07:05.056 { 00:07:05.056 "subsystems": [ 00:07:05.056 { 00:07:05.056 "subsystem": "bdev", 00:07:05.056 "config": [ 00:07:05.056 { 00:07:05.056 "params": { 00:07:05.056 "trtype": "pcie", 00:07:05.056 "traddr": "0000:00:10.0", 00:07:05.056 "name": "Nvme0" 00:07:05.056 }, 00:07:05.056 "method": "bdev_nvme_attach_controller" 00:07:05.056 }, 00:07:05.056 { 00:07:05.056 "method": "bdev_wait_for_examine" 00:07:05.056 } 00:07:05.056 ] 00:07:05.056 } 00:07:05.056 ] 00:07:05.056 } 00:07:05.315 [2024-07-25 01:50:20.367987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
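Each (block size, queue depth) pair then exercises the same three-step round-trip visible here: write the generated dump file through the bdev, read the same bytes back into a second dump, and byte-compare the two; clear_nvme then zeroes the first MiB so the next pass starts from clean media. Condensed, with --json <(gen_conf) standing in for the /dev/fd plumbing in the trace and paths shortened:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1            --json <(gen_conf)
"$DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)
diff -q test/dd/dd.dump0 test/dd/dd.dump1        # silent when the round-trip matches
"$DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(gen_conf)   # clear_nvme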
00:07:05.315 [2024-07-25 01:50:20.383542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.315 [2024-07-25 01:50:20.413967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.315 [2024-07-25 01:50:20.442647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.574  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:05.574 00:07:05.574 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:05.574 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:05.574 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:05.574 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:05.574 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:05.574 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:05.574 01:50:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.141 01:50:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:06.141 01:50:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:06.141 01:50:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.141 01:50:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.141 [2024-07-25 01:50:21.285579] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:06.141 [2024-07-25 01:50:21.285897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74978 ] 00:07:06.141 { 00:07:06.141 "subsystems": [ 00:07:06.141 { 00:07:06.141 "subsystem": "bdev", 00:07:06.141 "config": [ 00:07:06.141 { 00:07:06.141 "params": { 00:07:06.141 "trtype": "pcie", 00:07:06.141 "traddr": "0000:00:10.0", 00:07:06.141 "name": "Nvme0" 00:07:06.141 }, 00:07:06.141 "method": "bdev_nvme_attach_controller" 00:07:06.141 }, 00:07:06.141 { 00:07:06.141 "method": "bdev_wait_for_examine" 00:07:06.141 } 00:07:06.141 ] 00:07:06.141 } 00:07:06.141 ] 00:07:06.141 } 00:07:06.141 [2024-07-25 01:50:21.406821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:06.141 [2024-07-25 01:50:21.424747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.401 [2024-07-25 01:50:21.462028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.401 [2024-07-25 01:50:21.494514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.659  Copying: 60/60 [kB] (average 58 MBps) 00:07:06.659 00:07:06.659 01:50:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:06.659 01:50:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:06.659 01:50:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.659 01:50:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.659 { 00:07:06.659 "subsystems": [ 00:07:06.659 { 00:07:06.659 "subsystem": "bdev", 00:07:06.659 "config": [ 00:07:06.659 { 00:07:06.659 "params": { 00:07:06.659 "trtype": "pcie", 00:07:06.659 "traddr": "0000:00:10.0", 00:07:06.659 "name": "Nvme0" 00:07:06.659 }, 00:07:06.659 "method": "bdev_nvme_attach_controller" 00:07:06.659 }, 00:07:06.659 { 00:07:06.659 "method": "bdev_wait_for_examine" 00:07:06.659 } 00:07:06.659 ] 00:07:06.659 } 00:07:06.659 ] 00:07:06.659 } 00:07:06.659 [2024-07-25 01:50:21.768450] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:06.659 [2024-07-25 01:50:21.768537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74997 ] 00:07:06.659 [2024-07-25 01:50:21.889214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:06.659 [2024-07-25 01:50:21.908443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.659 [2024-07-25 01:50:21.942227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.918 [2024-07-25 01:50:21.973027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.918  Copying: 60/60 [kB] (average 29 MBps) 00:07:06.918 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.918 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.177 [2024-07-25 01:50:22.246901] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:07.177 [2024-07-25 01:50:22.247214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75007 ] 00:07:07.177 { 00:07:07.177 "subsystems": [ 00:07:07.177 { 00:07:07.177 "subsystem": "bdev", 00:07:07.177 "config": [ 00:07:07.177 { 00:07:07.177 "params": { 00:07:07.177 "trtype": "pcie", 00:07:07.177 "traddr": "0000:00:10.0", 00:07:07.177 "name": "Nvme0" 00:07:07.177 }, 00:07:07.177 "method": "bdev_nvme_attach_controller" 00:07:07.177 }, 00:07:07.177 { 00:07:07.177 "method": "bdev_wait_for_examine" 00:07:07.177 } 00:07:07.177 ] 00:07:07.177 } 00:07:07.177 ] 00:07:07.177 } 00:07:07.177 [2024-07-25 01:50:22.368471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
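From this point the log replays that cycle for the remaining permutations; only --bs, --qd, the shrinking --count, and the spdk_pid values change between runs. Structurally it is just the nested loop over the setup above (a sketch, not the literal basic_rw.sh):

for bs in "${bss[@]}"; do       # 4096, 8192, 16384
  for qd in "${qds[@]}"; do     # 1, 64
    : # write dump0 -> Nvme0n1, read back into dump1, diff -q, clear_nvme (as sketched above)
  done
done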
00:07:07.177 [2024-07-25 01:50:22.385311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.177 [2024-07-25 01:50:22.417636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.177 [2024-07-25 01:50:22.444904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.437  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:07.437 00:07:07.437 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:07.437 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:07.437 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:07.437 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:07.437 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:07.437 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:07.437 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:07.437 01:50:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.004 01:50:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:08.004 01:50:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:08.004 01:50:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.004 01:50:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.004 [2024-07-25 01:50:23.286063] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:08.004 [2024-07-25 01:50:23.286333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75026 ] 00:07:08.004 { 00:07:08.004 "subsystems": [ 00:07:08.004 { 00:07:08.004 "subsystem": "bdev", 00:07:08.004 "config": [ 00:07:08.004 { 00:07:08.004 "params": { 00:07:08.004 "trtype": "pcie", 00:07:08.004 "traddr": "0000:00:10.0", 00:07:08.004 "name": "Nvme0" 00:07:08.004 }, 00:07:08.004 "method": "bdev_nvme_attach_controller" 00:07:08.004 }, 00:07:08.004 { 00:07:08.004 "method": "bdev_wait_for_examine" 00:07:08.004 } 00:07:08.004 ] 00:07:08.004 } 00:07:08.004 ] 00:07:08.004 } 00:07:08.268 [2024-07-25 01:50:23.410658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:08.268 [2024-07-25 01:50:23.428545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.268 [2024-07-25 01:50:23.462364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.268 [2024-07-25 01:50:23.489919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.530  Copying: 56/56 [kB] (average 54 MBps) 00:07:08.530 00:07:08.530 01:50:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:08.530 01:50:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:08.530 01:50:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.530 01:50:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.530 [2024-07-25 01:50:23.757339] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:08.530 [2024-07-25 01:50:23.757420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75044 ] 00:07:08.530 { 00:07:08.530 "subsystems": [ 00:07:08.530 { 00:07:08.530 "subsystem": "bdev", 00:07:08.530 "config": [ 00:07:08.530 { 00:07:08.530 "params": { 00:07:08.530 "trtype": "pcie", 00:07:08.530 "traddr": "0000:00:10.0", 00:07:08.530 "name": "Nvme0" 00:07:08.530 }, 00:07:08.530 "method": "bdev_nvme_attach_controller" 00:07:08.530 }, 00:07:08.530 { 00:07:08.530 "method": "bdev_wait_for_examine" 00:07:08.530 } 00:07:08.530 ] 00:07:08.530 } 00:07:08.530 ] 00:07:08.530 } 00:07:08.788 [2024-07-25 01:50:23.872510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:08.788 [2024-07-25 01:50:23.886025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.788 [2024-07-25 01:50:23.919676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.788 [2024-07-25 01:50:23.946497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.046  Copying: 56/56 [kB] (average 54 MBps) 00:07:09.046 00:07:09.046 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.046 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.047 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.047 [2024-07-25 01:50:24.223964] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:09.047 [2024-07-25 01:50:24.224050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75055 ] 00:07:09.047 { 00:07:09.047 "subsystems": [ 00:07:09.047 { 00:07:09.047 "subsystem": "bdev", 00:07:09.047 "config": [ 00:07:09.047 { 00:07:09.047 "params": { 00:07:09.047 "trtype": "pcie", 00:07:09.047 "traddr": "0000:00:10.0", 00:07:09.047 "name": "Nvme0" 00:07:09.047 }, 00:07:09.047 "method": "bdev_nvme_attach_controller" 00:07:09.047 }, 00:07:09.047 { 00:07:09.047 "method": "bdev_wait_for_examine" 00:07:09.047 } 00:07:09.047 ] 00:07:09.047 } 00:07:09.047 ] 00:07:09.047 } 00:07:09.047 [2024-07-25 01:50:24.344581] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:09.305 [2024-07-25 01:50:24.361357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.305 [2024-07-25 01:50:24.394565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.305 [2024-07-25 01:50:24.421828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.564  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:09.564 00:07:09.564 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:09.564 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:09.564 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:09.564 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:09.564 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:09.564 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:09.564 01:50:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.131 01:50:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:10.131 01:50:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:10.131 01:50:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.131 01:50:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.131 [2024-07-25 01:50:25.226914] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:10.131 [2024-07-25 01:50:25.227166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75074 ] 00:07:10.131 { 00:07:10.131 "subsystems": [ 00:07:10.131 { 00:07:10.131 "subsystem": "bdev", 00:07:10.131 "config": [ 00:07:10.131 { 00:07:10.131 "params": { 00:07:10.131 "trtype": "pcie", 00:07:10.131 "traddr": "0000:00:10.0", 00:07:10.131 "name": "Nvme0" 00:07:10.131 }, 00:07:10.131 "method": "bdev_nvme_attach_controller" 00:07:10.131 }, 00:07:10.131 { 00:07:10.131 "method": "bdev_wait_for_examine" 00:07:10.131 } 00:07:10.131 ] 00:07:10.131 } 00:07:10.131 ] 00:07:10.131 } 00:07:10.132 [2024-07-25 01:50:25.342564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:10.132 [2024-07-25 01:50:25.360703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.132 [2024-07-25 01:50:25.391339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.132 [2024-07-25 01:50:25.418304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.391  Copying: 56/56 [kB] (average 54 MBps) 00:07:10.391 00:07:10.391 01:50:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:10.391 01:50:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:10.391 01:50:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.391 01:50:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.391 [2024-07-25 01:50:25.667403] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:10.391 [2024-07-25 01:50:25.667482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75082 ] 00:07:10.391 { 00:07:10.391 "subsystems": [ 00:07:10.391 { 00:07:10.391 "subsystem": "bdev", 00:07:10.391 "config": [ 00:07:10.391 { 00:07:10.391 "params": { 00:07:10.391 "trtype": "pcie", 00:07:10.391 "traddr": "0000:00:10.0", 00:07:10.391 "name": "Nvme0" 00:07:10.391 }, 00:07:10.391 "method": "bdev_nvme_attach_controller" 00:07:10.391 }, 00:07:10.391 { 00:07:10.391 "method": "bdev_wait_for_examine" 00:07:10.391 } 00:07:10.391 ] 00:07:10.391 } 00:07:10.391 ] 00:07:10.391 } 00:07:10.649 [2024-07-25 01:50:25.788771] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:10.649 [2024-07-25 01:50:25.803229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.649 [2024-07-25 01:50:25.836942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.649 [2024-07-25 01:50:25.863180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.908  Copying: 56/56 [kB] (average 54 MBps) 00:07:10.908 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.908 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.908 [2024-07-25 01:50:26.124167] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:10.908 [2024-07-25 01:50:26.124257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75103 ] 00:07:10.908 { 00:07:10.908 "subsystems": [ 00:07:10.908 { 00:07:10.908 "subsystem": "bdev", 00:07:10.908 "config": [ 00:07:10.908 { 00:07:10.908 "params": { 00:07:10.908 "trtype": "pcie", 00:07:10.908 "traddr": "0000:00:10.0", 00:07:10.908 "name": "Nvme0" 00:07:10.908 }, 00:07:10.908 "method": "bdev_nvme_attach_controller" 00:07:10.908 }, 00:07:10.908 { 00:07:10.908 "method": "bdev_wait_for_examine" 00:07:10.908 } 00:07:10.908 ] 00:07:10.908 } 00:07:10.908 ] 00:07:10.908 } 00:07:11.167 [2024-07-25 01:50:26.245619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:11.167 [2024-07-25 01:50:26.263777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.167 [2024-07-25 01:50:26.294654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.167 [2024-07-25 01:50:26.320965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.425  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:11.425 00:07:11.425 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:11.425 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:11.425 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:11.425 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:11.425 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:11.425 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:11.425 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:11.425 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.684 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:11.942 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:11.942 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.942 01:50:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.942 [2024-07-25 01:50:27.033049] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:11.942 [2024-07-25 01:50:27.033326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75122 ] 00:07:11.942 { 00:07:11.942 "subsystems": [ 00:07:11.942 { 00:07:11.942 "subsystem": "bdev", 00:07:11.942 "config": [ 00:07:11.942 { 00:07:11.942 "params": { 00:07:11.942 "trtype": "pcie", 00:07:11.942 "traddr": "0000:00:10.0", 00:07:11.942 "name": "Nvme0" 00:07:11.942 }, 00:07:11.942 "method": "bdev_nvme_attach_controller" 00:07:11.942 }, 00:07:11.942 { 00:07:11.942 "method": "bdev_wait_for_examine" 00:07:11.942 } 00:07:11.942 ] 00:07:11.942 } 00:07:11.942 ] 00:07:11.942 } 00:07:11.942 [2024-07-25 01:50:27.154233] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:11.942 [2024-07-25 01:50:27.172448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.942 [2024-07-25 01:50:27.208885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.942 [2024-07-25 01:50:27.236886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.200  Copying: 48/48 [kB] (average 46 MBps) 00:07:12.200 00:07:12.200 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:12.200 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:12.200 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.200 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.200 [2024-07-25 01:50:27.492660] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:12.200 [2024-07-25 01:50:27.492746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75130 ] 00:07:12.200 { 00:07:12.200 "subsystems": [ 00:07:12.200 { 00:07:12.200 "subsystem": "bdev", 00:07:12.200 "config": [ 00:07:12.200 { 00:07:12.200 "params": { 00:07:12.200 "trtype": "pcie", 00:07:12.200 "traddr": "0000:00:10.0", 00:07:12.200 "name": "Nvme0" 00:07:12.200 }, 00:07:12.200 "method": "bdev_nvme_attach_controller" 00:07:12.200 }, 00:07:12.200 { 00:07:12.200 "method": "bdev_wait_for_examine" 00:07:12.200 } 00:07:12.200 ] 00:07:12.200 } 00:07:12.200 ] 00:07:12.200 } 00:07:12.460 [2024-07-25 01:50:27.613675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:12.460 [2024-07-25 01:50:27.631887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.460 [2024-07-25 01:50:27.662831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.460 [2024-07-25 01:50:27.691628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.718  Copying: 48/48 [kB] (average 46 MBps) 00:07:12.718 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.719 01:50:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.719 [2024-07-25 01:50:27.959593] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:12.719 [2024-07-25 01:50:27.959692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75151 ] 00:07:12.719 { 00:07:12.719 "subsystems": [ 00:07:12.719 { 00:07:12.719 "subsystem": "bdev", 00:07:12.719 "config": [ 00:07:12.719 { 00:07:12.719 "params": { 00:07:12.719 "trtype": "pcie", 00:07:12.719 "traddr": "0000:00:10.0", 00:07:12.719 "name": "Nvme0" 00:07:12.719 }, 00:07:12.719 "method": "bdev_nvme_attach_controller" 00:07:12.719 }, 00:07:12.719 { 00:07:12.719 "method": "bdev_wait_for_examine" 00:07:12.719 } 00:07:12.719 ] 00:07:12.719 } 00:07:12.719 ] 00:07:12.719 } 00:07:12.977 [2024-07-25 01:50:28.080211] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:12.977 [2024-07-25 01:50:28.096725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.977 [2024-07-25 01:50:28.127724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.977 [2024-07-25 01:50:28.153988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.235  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:13.235 00:07:13.235 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:13.235 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:13.235 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:13.235 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:13.235 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:13.235 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:13.235 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.494 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:13.494 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:13.494 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.494 01:50:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.752 [2024-07-25 01:50:28.808492] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:13.752 [2024-07-25 01:50:28.808765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75165 ] 00:07:13.752 { 00:07:13.752 "subsystems": [ 00:07:13.752 { 00:07:13.752 "subsystem": "bdev", 00:07:13.752 "config": [ 00:07:13.752 { 00:07:13.752 "params": { 00:07:13.752 "trtype": "pcie", 00:07:13.752 "traddr": "0000:00:10.0", 00:07:13.752 "name": "Nvme0" 00:07:13.752 }, 00:07:13.752 "method": "bdev_nvme_attach_controller" 00:07:13.752 }, 00:07:13.752 { 00:07:13.752 "method": "bdev_wait_for_examine" 00:07:13.752 } 00:07:13.752 ] 00:07:13.752 } 00:07:13.752 ] 00:07:13.752 } 00:07:13.752 [2024-07-25 01:50:28.929573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:13.752 [2024-07-25 01:50:28.948430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.752 [2024-07-25 01:50:28.980466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.752 [2024-07-25 01:50:29.007542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.011  Copying: 48/48 [kB] (average 46 MBps) 00:07:14.011 00:07:14.011 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:14.011 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:14.011 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.011 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.011 [2024-07-25 01:50:29.263074] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:14.011 [2024-07-25 01:50:29.263158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 00:07:14.011 { 00:07:14.011 "subsystems": [ 00:07:14.011 { 00:07:14.011 "subsystem": "bdev", 00:07:14.011 "config": [ 00:07:14.011 { 00:07:14.011 "params": { 00:07:14.011 "trtype": "pcie", 00:07:14.011 "traddr": "0000:00:10.0", 00:07:14.011 "name": "Nvme0" 00:07:14.011 }, 00:07:14.011 "method": "bdev_nvme_attach_controller" 00:07:14.011 }, 00:07:14.011 { 00:07:14.011 "method": "bdev_wait_for_examine" 00:07:14.011 } 00:07:14.011 ] 00:07:14.011 } 00:07:14.011 ] 00:07:14.011 } 00:07:14.270 [2024-07-25 01:50:29.384260] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:14.270 [2024-07-25 01:50:29.400945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.270 [2024-07-25 01:50:29.432146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.270 [2024-07-25 01:50:29.459252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.528  Copying: 48/48 [kB] (average 46 MBps) 00:07:14.528 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:14.528 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:14.529 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.529 01:50:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.529 [2024-07-25 01:50:29.738539] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:14.529 [2024-07-25 01:50:29.738788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75188 ] 00:07:14.529 { 00:07:14.529 "subsystems": [ 00:07:14.529 { 00:07:14.529 "subsystem": "bdev", 00:07:14.529 "config": [ 00:07:14.529 { 00:07:14.529 "params": { 00:07:14.529 "trtype": "pcie", 00:07:14.529 "traddr": "0000:00:10.0", 00:07:14.529 "name": "Nvme0" 00:07:14.529 }, 00:07:14.529 "method": "bdev_nvme_attach_controller" 00:07:14.529 }, 00:07:14.529 { 00:07:14.529 "method": "bdev_wait_for_examine" 00:07:14.529 } 00:07:14.529 ] 00:07:14.529 } 00:07:14.529 ] 00:07:14.529 } 00:07:14.787 [2024-07-25 01:50:29.860009] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:14.787 [2024-07-25 01:50:29.875658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.787 [2024-07-25 01:50:29.906368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.787 [2024-07-25 01:50:29.932843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.046  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:15.046 00:07:15.046 00:07:15.046 real 0m11.497s 00:07:15.046 user 0m8.539s 00:07:15.046 sys 0m3.450s 00:07:15.046 ************************************ 00:07:15.046 END TEST dd_rw 00:07:15.046 ************************************ 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.046 ************************************ 00:07:15.046 START TEST dd_rw_offset 00:07:15.046 ************************************ 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:15.046 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:15.047 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=secni5mikyybeh14lufhmmp0magxd03ehwxmstprd38c1o5vk01w61u6atltwt6vjz0vvmdsasl8amgtvzoaykisj3rbupypxdd85oo7u6m9lgk4pxs4sdoc7a6bcmhimvchx9kd90c3jukfosdo0cwvj72lmca5yzlg83kmzbjiao66jix24tb5cjp3wr7jg4gylfyd80lrcz0kqlszs6jl8or8k8zj9wlbfbsz8n2tyodsl7n02hhv2ygxwczjru9kjgvivvqyh3u6ui83cgf36h1vomy8uqrprdpiexphpyz9x7ilikdl70upnx662h32flbz7gvt3h6mflcxef3vdzbk0muh3v360bfscdbs70qsomhff0snikaz3oddy6biprd8bkwe5jtaya6uujf51am71aecnoq6j3h868zaf40wpzkbg9rpo73smxbnnn78oitdrsvgp78ejvv91zvhnuppon5bo504i3wkz9gbeutm2950icnt09jzj5qtcf81579s706qrdj03lb8r5yqxyb3v81ors98y1wij2pokeqlji0es5k4yetijycsqhgve9sden3z97elji4vh601u7pp3bjruo38g743d7cegd127sxzplt2q8f8lug11f7jy86wysdmw7pnev0g8xme6h6r4vflcrq1ggun8jriypganng1m65bohh0gjkt8wk8rk9tz1cxj3w8aog4lod880rpa3eiiv07e31j0r4p35jwmr0x3rpukmxilvlxsx7f0etwahv9o6i8ihs9ubkwswiedpmvwx2pvvkbxvqye0de5mhpkbx359fz304aoezpp22hr2sc8ndv7nk1orzxmewg4j9k0bnehhlquad2ehoa3wyfxh1pammkhqzzxwb01fiend38o5elqg473sm4pmfsuk7u3xtm1qom2807rpe0zz66hey9behvnxxavqosc14qrblfxxaapedwn8a6f764twyehq0ahhk66uqi2tcckv4oof1ptsh8eto5y316j8j8oa0kfqg1ha0hf2jjcc4p75g6hoyhdjlena97i218v0vj79owgv9knfk16dngez0gbgjgzm4q7m1pyo3joxva8qc0p9awesrheoxdizk3hjfypbjgjtu740hcgw76bgpf480qihu32o4yys859n58331qabvkla05hkf8imfi7bwxo3e7v8l6e4ca3r6k93jaawaq401y92b3x1aeuztsgxrv2uulwbt2bmv0ufk2ijla0noc307q17opxm03a8deup5953ax595hu55uuit33rrj937mwmb5qs131gz4natlynw61mrjf8q5u4y7yvx5qvb6hj01ecyd8nzs6pt809k7tpx8nwtucd7m0saqcvl3mhg9p8sjukbt50mmkmesp40vmz9ku8rbaruwzlj85l9hk0kxmgd9i9u0bfon26e3kxa4mp1l8upbntdg52zfrsjgo5kaq661nkapzz0f4ykag780mttj12ihxeqffif44rokgc2j58tb31g06a4lcdys4mhvhe40r2xzf7rgws8fnbyf0efn9261g5wqhinsqou4bd8kkyssrj2sa8r4l8irxa40tw36ql1qh18uexe8ouqe2cvgnzemmzd7mb2ljitlhojpk5godwiu9f4a59npvjnzwt1vuqt6ifmvrhmpgn3p9wlyuz5i09m8myhieazxzwce2f8qvpvwgy77tcowb3ekmxrx9j0mg1woubuvsar70ti8i8du1xu29z6oq254vantvd8163x0a0ci8m50uy8usxyfgsowu2k6mz46u3ouo3m40upnv1ynwz114r885tykj1ad73abo5lyue9eidq6c5nihxo8k4zg95v4x1wml6ohfutpgueilm8tl3nyee81cbcwk9uor07u77n6lbbmenk27iyri012xe7h36dy436oi6q4ulq51uzyfl6slbqgtzeveyyemfhwjs5rj6cklwubyv5ddiugyrmh06ks93xblyxrs1ohmaa14fu7inplgb17jjju80exws5lon7oy177gsfzxfjcx5ooij6eiedb6syna90ctnl8buaj1dkjbzhh6o0nquaty76kuf2lb7d2mg3dsbyiuhldd8ouphazsck0nknk59f4n05gp1rqxgq1ff68ju1etmm1licwwtgkhabm1ovbctrcggj82cn9q3rc0a16ny4wdm2h4afjfs37euxcdzrsimawtin3f03by6j5w464nb7lp2ghs9a010xlc3ju959v3kl97vnh4h049a43wz44ohl40nzw5w0r8vsb1t8gxyeqwg99pd80uy3eh9xoimtrbp2empokmsolxza74wkqsorlfb5q0c8ouim3j4gq0tubcktnm3hljxhur42aje3e69wamfyybrlcj0bcap3uhkvblspogqgok8jxo8p1kat4rdhdk43y3zjerejof6xxj35ku4zxzwn8my4728g224oqdb4rghdniv0vyttvy6izp7lav2o40wjscw2l1rzcnj4opbq4e9poseh3mk7agjdv8dj73ayxavkzyx0jlg8af347e4w2yadns3i4o0crd5jmuc71n82lo16rok7b8eqqgaymt7cdi8fhmenwxlit3kg41ya6v4sbako7m8mabd2llkk9xuunjo78bmprk9wuhfot7q76ev2n2bkn7fip8kybdmifekm0mmb38whqmc98zx9s5ilpv2csvva4ql6wy7ybqa5crvqur45lghdhoxd3i7u2lxrw46jntyzdesmtlmc9dxptfohfskw30q3lt83jw43j2j1egibmista3zu8cofmyde5wieqcz2j44cvptci7uj9fizlhe853twr97cexuwp7881r7wq1hibwwhty57as9ma8c33pa0c527cgcep6wq77y2ro5hlra9kth3zfy8x4n0dzvd0kkaw0umm4t8obo84qygk8hf0cmiwxq68bhravgrnjy80s7txxtq1eno5l5jyvl1bsm1ijjqcm4qmi0t0vn5fs2t1o36zl1j4gu91tp7awha427pjwidr619bduc1y82431gv9uvi8v3vl3yuklh0hh8mv4g49cq4qznmr9foehudo1vtwselzhrg206yeq863r65s8s6dbnzav2hotodadzk38eln5x941b9cr6cch11ff02ef8gh4131a4dzcn2b57yg34963h0k60khk8qyq3v2ixp27weg0txesaurqbbwreguocnefwlspxs2mcooydadj0o3s1hewk06pdc9n5hbnp16kazsk4v3gkah29jhzsg1p8vnt1p6kv0mbu6dcu3bge6jpsl9fky4t7pkzffjtkohzykivd63lb9xifi8fpsz6lcrjwa9v0zr61k3w8tg3gw238dj5ts1exhqab7jdk3u1gq0rksvz4xm9y08xwtn42wlm4oeb4qv3k32ad83jz9eip8n0h6fuff6ej2bsz1g2ggwxkhr378f3x6mlx9v1x10wpkuqn1bhjwl8
vsdekeoe9uqw7n5kcw3jrisyqgsig08by2xygoksurp9cnsn0bi24qobgdwt7kz7gqjxsib5qw0lcb2m1f2440mu0du29jdmitmq2vdgct2brd2fyk779cwp05sim7jrl3k7t4cir9p44uvx8fs8kmmxddjvph1nmf5xdkzd98yu4fqag0j7kogymflzeb16pl25718slckkrap3arktex66pkk5c98b9km85xd5m9ql3yz7brluhezy9kd83b2xz13bfh9baz8sxda7kliko1v044va2glytxe7bzbkprwzy2p5qb3ggmyx4nuimgzpqzncjbxd5i1yiwntyq40i3fsxpmtga36cyowm0iflxiflyhoihb0wd082uflrkfntttclhoh06pibuchf8qpj0h4n3l4zu4hhzsqzr1ilzls790ask9o4at3qgykb9adegi12f5ak7inkw9tdax0viudmpgcbmidht53h0sqrjdvhy1tmn5xaut6j05g2xs2xvzm2v76im4j4v8f2qh9y4aew2l43p0axy 00:07:15.047 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:15.047 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:15.047 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:15.047 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:15.047 [2024-07-25 01:50:30.301558] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:15.047 [2024-07-25 01:50:30.301642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75224 ] 00:07:15.047 { 00:07:15.047 "subsystems": [ 00:07:15.047 { 00:07:15.047 "subsystem": "bdev", 00:07:15.047 "config": [ 00:07:15.047 { 00:07:15.047 "params": { 00:07:15.047 "trtype": "pcie", 00:07:15.047 "traddr": "0000:00:10.0", 00:07:15.047 "name": "Nvme0" 00:07:15.047 }, 00:07:15.047 "method": "bdev_nvme_attach_controller" 00:07:15.047 }, 00:07:15.047 { 00:07:15.047 "method": "bdev_wait_for_examine" 00:07:15.047 } 00:07:15.047 ] 00:07:15.047 } 00:07:15.047 ] 00:07:15.047 } 00:07:15.306 [2024-07-25 01:50:30.422490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:15.306 [2024-07-25 01:50:30.440124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.306 [2024-07-25 01:50:30.470922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.306 [2024-07-25 01:50:30.497261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.564  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:15.564 00:07:15.564 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:15.564 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:15.564 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:15.564 01:50:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:15.564 [2024-07-25 01:50:30.769556] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
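For orientation: each spdk_dd invocation in this trace receives its bdev configuration as JSON on an inherited file descriptor (gen_conf feeding --json /dev/fd/62); the { "subsystems": ... } blocks echoed around these runs attach the NVMe controller at PCIe address 0000:00:10.0 as "Nvme0" (hence the Nvme0n1 bdev) and then wait for bdev examine to finish. A minimal sketch of the same pattern, writing the config to a scratch file (nvme0.json is a hypothetical name) in place of the harness's fd-62 plumbing:

# Reproduce the bdev config echoed in this trace, then point spdk_dd at it.
cat > nvme0.json <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0"},
   "method": "bdev_nvme_attach_controller"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json nvme0.json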
00:07:15.564 [2024-07-25 01:50:30.769639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75232 ] 00:07:15.564 { 00:07:15.564 "subsystems": [ 00:07:15.564 { 00:07:15.564 "subsystem": "bdev", 00:07:15.564 "config": [ 00:07:15.564 { 00:07:15.564 "params": { 00:07:15.564 "trtype": "pcie", 00:07:15.564 "traddr": "0000:00:10.0", 00:07:15.564 "name": "Nvme0" 00:07:15.564 }, 00:07:15.564 "method": "bdev_nvme_attach_controller" 00:07:15.564 }, 00:07:15.564 { 00:07:15.564 "method": "bdev_wait_for_examine" 00:07:15.564 } 00:07:15.564 ] 00:07:15.564 } 00:07:15.564 ] 00:07:15.564 } 00:07:15.823 [2024-07-25 01:50:30.890578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:15.823 [2024-07-25 01:50:30.908091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.823 [2024-07-25 01:50:30.939184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.823 [2024-07-25 01:50:30.965922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.082  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:16.082 00:07:16.082 ************************************ 00:07:16.082 END TEST dd_rw_offset 00:07:16.082 ************************************ 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ secni5mikyybeh14lufhmmp0magxd03ehwxmstprd38c1o5vk01w61u6atltwt6vjz0vvmdsasl8amgtvzoaykisj3rbupypxdd85oo7u6m9lgk4pxs4sdoc7a6bcmhimvchx9kd90c3jukfosdo0cwvj72lmca5yzlg83kmzbjiao66jix24tb5cjp3wr7jg4gylfyd80lrcz0kqlszs6jl8or8k8zj9wlbfbsz8n2tyodsl7n02hhv2ygxwczjru9kjgvivvqyh3u6ui83cgf36h1vomy8uqrprdpiexphpyz9x7ilikdl70upnx662h32flbz7gvt3h6mflcxef3vdzbk0muh3v360bfscdbs70qsomhff0snikaz3oddy6biprd8bkwe5jtaya6uujf51am71aecnoq6j3h868zaf40wpzkbg9rpo73smxbnnn78oitdrsvgp78ejvv91zvhnuppon5bo504i3wkz9gbeutm2950icnt09jzj5qtcf81579s706qrdj03lb8r5yqxyb3v81ors98y1wij2pokeqlji0es5k4yetijycsqhgve9sden3z97elji4vh601u7pp3bjruo38g743d7cegd127sxzplt2q8f8lug11f7jy86wysdmw7pnev0g8xme6h6r4vflcrq1ggun8jriypganng1m65bohh0gjkt8wk8rk9tz1cxj3w8aog4lod880rpa3eiiv07e31j0r4p35jwmr0x3rpukmxilvlxsx7f0etwahv9o6i8ihs9ubkwswiedpmvwx2pvvkbxvqye0de5mhpkbx359fz304aoezpp22hr2sc8ndv7nk1orzxmewg4j9k0bnehhlquad2ehoa3wyfxh1pammkhqzzxwb01fiend38o5elqg473sm4pmfsuk7u3xtm1qom2807rpe0zz66hey9behvnxxavqosc14qrblfxxaapedwn8a6f764twyehq0ahhk66uqi2tcckv4oof1ptsh8eto5y316j8j8oa0kfqg1ha0hf2jjcc4p75g6hoyhdjlena97i218v0vj79owgv9knfk16dngez0gbgjgzm4q7m1pyo3joxva8qc0p9awesrheoxdizk3hjfypbjgjtu740hcgw76bgpf480qihu32o4yys859n58331qabvkla05hkf8imfi7bwxo3e7v8l6e4ca3r6k93jaawaq401y92b3x1aeuztsgxrv2uulwbt2bmv0ufk2ijla0noc307q17opxm03a8deup5953ax595hu55uuit33rrj937mwmb5qs131gz4natlynw61mrjf8q5u4y7yvx5qvb6hj01ecyd8nzs6pt809k7tpx8nwtucd7m0saqcvl3mhg9p8sjukbt50mmkmesp40vmz9ku8rbaruwzlj85l9hk0kxmgd9i9u0bfon26e3kxa4mp1l8upbntdg52zfrsjgo5kaq661nkapzz0f4ykag780mttj12ihxeqffif44rokgc2j58tb31g06a4lcdys4mhvhe40r2xzf7rgws8fnbyf0efn9261g5wqhinsqou4bd8kkyssrj2sa8r4l8irxa40tw36ql1qh18uexe8ouqe2cvgnzemmzd7mb2ljitlhojpk5godwiu9f4a59npvjnzwt1vuqt6ifmvrhmpgn3p9wlyuz5i09m8myhieazxzwce2f8qvpvwgy77tcowb3ekmxrx9j0mg1woubuvsar70ti8i8du1xu29z6oq254vantvd8163x0a0ci8m50uy8usxyfgsowu2k6mz46u3ouo3m40u
pnv1ynwz114r885tykj1ad73abo5lyue9eidq6c5nihxo8k4zg95v4x1wml6ohfutpgueilm8tl3nyee81cbcwk9uor07u77n6lbbmenk27iyri012xe7h36dy436oi6q4ulq51uzyfl6slbqgtzeveyyemfhwjs5rj6cklwubyv5ddiugyrmh06ks93xblyxrs1ohmaa14fu7inplgb17jjju80exws5lon7oy177gsfzxfjcx5ooij6eiedb6syna90ctnl8buaj1dkjbzhh6o0nquaty76kuf2lb7d2mg3dsbyiuhldd8ouphazsck0nknk59f4n05gp1rqxgq1ff68ju1etmm1licwwtgkhabm1ovbctrcggj82cn9q3rc0a16ny4wdm2h4afjfs37euxcdzrsimawtin3f03by6j5w464nb7lp2ghs9a010xlc3ju959v3kl97vnh4h049a43wz44ohl40nzw5w0r8vsb1t8gxyeqwg99pd80uy3eh9xoimtrbp2empokmsolxza74wkqsorlfb5q0c8ouim3j4gq0tubcktnm3hljxhur42aje3e69wamfyybrlcj0bcap3uhkvblspogqgok8jxo8p1kat4rdhdk43y3zjerejof6xxj35ku4zxzwn8my4728g224oqdb4rghdniv0vyttvy6izp7lav2o40wjscw2l1rzcnj4opbq4e9poseh3mk7agjdv8dj73ayxavkzyx0jlg8af347e4w2yadns3i4o0crd5jmuc71n82lo16rok7b8eqqgaymt7cdi8fhmenwxlit3kg41ya6v4sbako7m8mabd2llkk9xuunjo78bmprk9wuhfot7q76ev2n2bkn7fip8kybdmifekm0mmb38whqmc98zx9s5ilpv2csvva4ql6wy7ybqa5crvqur45lghdhoxd3i7u2lxrw46jntyzdesmtlmc9dxptfohfskw30q3lt83jw43j2j1egibmista3zu8cofmyde5wieqcz2j44cvptci7uj9fizlhe853twr97cexuwp7881r7wq1hibwwhty57as9ma8c33pa0c527cgcep6wq77y2ro5hlra9kth3zfy8x4n0dzvd0kkaw0umm4t8obo84qygk8hf0cmiwxq68bhravgrnjy80s7txxtq1eno5l5jyvl1bsm1ijjqcm4qmi0t0vn5fs2t1o36zl1j4gu91tp7awha427pjwidr619bduc1y82431gv9uvi8v3vl3yuklh0hh8mv4g49cq4qznmr9foehudo1vtwselzhrg206yeq863r65s8s6dbnzav2hotodadzk38eln5x941b9cr6cch11ff02ef8gh4131a4dzcn2b57yg34963h0k60khk8qyq3v2ixp27weg0txesaurqbbwreguocnefwlspxs2mcooydadj0o3s1hewk06pdc9n5hbnp16kazsk4v3gkah29jhzsg1p8vnt1p6kv0mbu6dcu3bge6jpsl9fky4t7pkzffjtkohzykivd63lb9xifi8fpsz6lcrjwa9v0zr61k3w8tg3gw238dj5ts1exhqab7jdk3u1gq0rksvz4xm9y08xwtn42wlm4oeb4qv3k32ad83jz9eip8n0h6fuff6ej2bsz1g2ggwxkhr378f3x6mlx9v1x10wpkuqn1bhjwl8vsdekeoe9uqw7n5kcw3jrisyqgsig08by2xygoksurp9cnsn0bi24qobgdwt7kz7gqjxsib5qw0lcb2m1f2440mu0du29jdmitmq2vdgct2brd2fyk779cwp05sim7jrl3k7t4cir9p44uvx8fs8kmmxddjvph1nmf5xdkzd98yu4fqag0j7kogymflzeb16pl25718slckkrap3arktex66pkk5c98b9km85xd5m9ql3yz7brluhezy9kd83b2xz13bfh9baz8sxda7kliko1v044va2glytxe7bzbkprwzy2p5qb3ggmyx4nuimgzpqzncjbxd5i1yiwntyq40i3fsxpmtga36cyowm0iflxiflyhoihb0wd082uflrkfntttclhoh06pibuchf8qpj0h4n3l4zu4hhzsqzr1ilzls790ask9o4at3qgykb9adegi12f5ak7inkw9tdax0viudmpgcbmidht53h0sqrjdvhy1tmn5xaut6j05g2xs2xvzm2v76im4j4v8f2qh9y4aew2l43p0axy == 
\s\e\c\n\i\5\m\i\k\y\y\b\e\h\1\4\l\u\f\h\m\m\p\0\m\a\g\x\d\0\3\e\h\w\x\m\s\t\p\r\d\3\8\c\1\o\5\v\k\0\1\w\6\1\u\6\a\t\l\t\w\t\6\v\j\z\0\v\v\m\d\s\a\s\l\8\a\m\g\t\v\z\o\a\y\k\i\s\j\3\r\b\u\p\y\p\x\d\d\8\5\o\o\7\u\6\m\9\l\g\k\4\p\x\s\4\s\d\o\c\7\a\6\b\c\m\h\i\m\v\c\h\x\9\k\d\9\0\c\3\j\u\k\f\o\s\d\o\0\c\w\v\j\7\2\l\m\c\a\5\y\z\l\g\8\3\k\m\z\b\j\i\a\o\6\6\j\i\x\2\4\t\b\5\c\j\p\3\w\r\7\j\g\4\g\y\l\f\y\d\8\0\l\r\c\z\0\k\q\l\s\z\s\6\j\l\8\o\r\8\k\8\z\j\9\w\l\b\f\b\s\z\8\n\2\t\y\o\d\s\l\7\n\0\2\h\h\v\2\y\g\x\w\c\z\j\r\u\9\k\j\g\v\i\v\v\q\y\h\3\u\6\u\i\8\3\c\g\f\3\6\h\1\v\o\m\y\8\u\q\r\p\r\d\p\i\e\x\p\h\p\y\z\9\x\7\i\l\i\k\d\l\7\0\u\p\n\x\6\6\2\h\3\2\f\l\b\z\7\g\v\t\3\h\6\m\f\l\c\x\e\f\3\v\d\z\b\k\0\m\u\h\3\v\3\6\0\b\f\s\c\d\b\s\7\0\q\s\o\m\h\f\f\0\s\n\i\k\a\z\3\o\d\d\y\6\b\i\p\r\d\8\b\k\w\e\5\j\t\a\y\a\6\u\u\j\f\5\1\a\m\7\1\a\e\c\n\o\q\6\j\3\h\8\6\8\z\a\f\4\0\w\p\z\k\b\g\9\r\p\o\7\3\s\m\x\b\n\n\n\7\8\o\i\t\d\r\s\v\g\p\7\8\e\j\v\v\9\1\z\v\h\n\u\p\p\o\n\5\b\o\5\0\4\i\3\w\k\z\9\g\b\e\u\t\m\2\9\5\0\i\c\n\t\0\9\j\z\j\5\q\t\c\f\8\1\5\7\9\s\7\0\6\q\r\d\j\0\3\l\b\8\r\5\y\q\x\y\b\3\v\8\1\o\r\s\9\8\y\1\w\i\j\2\p\o\k\e\q\l\j\i\0\e\s\5\k\4\y\e\t\i\j\y\c\s\q\h\g\v\e\9\s\d\e\n\3\z\9\7\e\l\j\i\4\v\h\6\0\1\u\7\p\p\3\b\j\r\u\o\3\8\g\7\4\3\d\7\c\e\g\d\1\2\7\s\x\z\p\l\t\2\q\8\f\8\l\u\g\1\1\f\7\j\y\8\6\w\y\s\d\m\w\7\p\n\e\v\0\g\8\x\m\e\6\h\6\r\4\v\f\l\c\r\q\1\g\g\u\n\8\j\r\i\y\p\g\a\n\n\g\1\m\6\5\b\o\h\h\0\g\j\k\t\8\w\k\8\r\k\9\t\z\1\c\x\j\3\w\8\a\o\g\4\l\o\d\8\8\0\r\p\a\3\e\i\i\v\0\7\e\3\1\j\0\r\4\p\3\5\j\w\m\r\0\x\3\r\p\u\k\m\x\i\l\v\l\x\s\x\7\f\0\e\t\w\a\h\v\9\o\6\i\8\i\h\s\9\u\b\k\w\s\w\i\e\d\p\m\v\w\x\2\p\v\v\k\b\x\v\q\y\e\0\d\e\5\m\h\p\k\b\x\3\5\9\f\z\3\0\4\a\o\e\z\p\p\2\2\h\r\2\s\c\8\n\d\v\7\n\k\1\o\r\z\x\m\e\w\g\4\j\9\k\0\b\n\e\h\h\l\q\u\a\d\2\e\h\o\a\3\w\y\f\x\h\1\p\a\m\m\k\h\q\z\z\x\w\b\0\1\f\i\e\n\d\3\8\o\5\e\l\q\g\4\7\3\s\m\4\p\m\f\s\u\k\7\u\3\x\t\m\1\q\o\m\2\8\0\7\r\p\e\0\z\z\6\6\h\e\y\9\b\e\h\v\n\x\x\a\v\q\o\s\c\1\4\q\r\b\l\f\x\x\a\a\p\e\d\w\n\8\a\6\f\7\6\4\t\w\y\e\h\q\0\a\h\h\k\6\6\u\q\i\2\t\c\c\k\v\4\o\o\f\1\p\t\s\h\8\e\t\o\5\y\3\1\6\j\8\j\8\o\a\0\k\f\q\g\1\h\a\0\h\f\2\j\j\c\c\4\p\7\5\g\6\h\o\y\h\d\j\l\e\n\a\9\7\i\2\1\8\v\0\v\j\7\9\o\w\g\v\9\k\n\f\k\1\6\d\n\g\e\z\0\g\b\g\j\g\z\m\4\q\7\m\1\p\y\o\3\j\o\x\v\a\8\q\c\0\p\9\a\w\e\s\r\h\e\o\x\d\i\z\k\3\h\j\f\y\p\b\j\g\j\t\u\7\4\0\h\c\g\w\7\6\b\g\p\f\4\8\0\q\i\h\u\3\2\o\4\y\y\s\8\5\9\n\5\8\3\3\1\q\a\b\v\k\l\a\0\5\h\k\f\8\i\m\f\i\7\b\w\x\o\3\e\7\v\8\l\6\e\4\c\a\3\r\6\k\9\3\j\a\a\w\a\q\4\0\1\y\9\2\b\3\x\1\a\e\u\z\t\s\g\x\r\v\2\u\u\l\w\b\t\2\b\m\v\0\u\f\k\2\i\j\l\a\0\n\o\c\3\0\7\q\1\7\o\p\x\m\0\3\a\8\d\e\u\p\5\9\5\3\a\x\5\9\5\h\u\5\5\u\u\i\t\3\3\r\r\j\9\3\7\m\w\m\b\5\q\s\1\3\1\g\z\4\n\a\t\l\y\n\w\6\1\m\r\j\f\8\q\5\u\4\y\7\y\v\x\5\q\v\b\6\h\j\0\1\e\c\y\d\8\n\z\s\6\p\t\8\0\9\k\7\t\p\x\8\n\w\t\u\c\d\7\m\0\s\a\q\c\v\l\3\m\h\g\9\p\8\s\j\u\k\b\t\5\0\m\m\k\m\e\s\p\4\0\v\m\z\9\k\u\8\r\b\a\r\u\w\z\l\j\8\5\l\9\h\k\0\k\x\m\g\d\9\i\9\u\0\b\f\o\n\2\6\e\3\k\x\a\4\m\p\1\l\8\u\p\b\n\t\d\g\5\2\z\f\r\s\j\g\o\5\k\a\q\6\6\1\n\k\a\p\z\z\0\f\4\y\k\a\g\7\8\0\m\t\t\j\1\2\i\h\x\e\q\f\f\i\f\4\4\r\o\k\g\c\2\j\5\8\t\b\3\1\g\0\6\a\4\l\c\d\y\s\4\m\h\v\h\e\4\0\r\2\x\z\f\7\r\g\w\s\8\f\n\b\y\f\0\e\f\n\9\2\6\1\g\5\w\q\h\i\n\s\q\o\u\4\b\d\8\k\k\y\s\s\r\j\2\s\a\8\r\4\l\8\i\r\x\a\4\0\t\w\3\6\q\l\1\q\h\1\8\u\e\x\e\8\o\u\q\e\2\c\v\g\n\z\e\m\m\z\d\7\m\b\2\l\j\i\t\l\h\o\j\p\k\5\g\o\d\w\i\u\9\f\4\a\5\9\n\p\v\j\n\z\w\t\1\v\u\q\t\6\i\f\m\v\r\h\m\p\g\n\3\p\9\w\l\y\u\z\5\i\0\9\m\8\m\y\h\i\e\a\z\x\z\w\c\e\2\f\8\q\v\p\v\w\g\y\7\7\t\c\o\w\b\3\e\k\m\x\r\x\9\j\0\m\g\1\w\o\u\b\u\v\s\a\r\7\0\t\i\8\i\8\d\u\1\x\u\2\9\
z\6\o\q\2\5\4\v\a\n\t\v\d\8\1\6\3\x\0\a\0\c\i\8\m\5\0\u\y\8\u\s\x\y\f\g\s\o\w\u\2\k\6\m\z\4\6\u\3\o\u\o\3\m\4\0\u\p\n\v\1\y\n\w\z\1\1\4\r\8\8\5\t\y\k\j\1\a\d\7\3\a\b\o\5\l\y\u\e\9\e\i\d\q\6\c\5\n\i\h\x\o\8\k\4\z\g\9\5\v\4\x\1\w\m\l\6\o\h\f\u\t\p\g\u\e\i\l\m\8\t\l\3\n\y\e\e\8\1\c\b\c\w\k\9\u\o\r\0\7\u\7\7\n\6\l\b\b\m\e\n\k\2\7\i\y\r\i\0\1\2\x\e\7\h\3\6\d\y\4\3\6\o\i\6\q\4\u\l\q\5\1\u\z\y\f\l\6\s\l\b\q\g\t\z\e\v\e\y\y\e\m\f\h\w\j\s\5\r\j\6\c\k\l\w\u\b\y\v\5\d\d\i\u\g\y\r\m\h\0\6\k\s\9\3\x\b\l\y\x\r\s\1\o\h\m\a\a\1\4\f\u\7\i\n\p\l\g\b\1\7\j\j\j\u\8\0\e\x\w\s\5\l\o\n\7\o\y\1\7\7\g\s\f\z\x\f\j\c\x\5\o\o\i\j\6\e\i\e\d\b\6\s\y\n\a\9\0\c\t\n\l\8\b\u\a\j\1\d\k\j\b\z\h\h\6\o\0\n\q\u\a\t\y\7\6\k\u\f\2\l\b\7\d\2\m\g\3\d\s\b\y\i\u\h\l\d\d\8\o\u\p\h\a\z\s\c\k\0\n\k\n\k\5\9\f\4\n\0\5\g\p\1\r\q\x\g\q\1\f\f\6\8\j\u\1\e\t\m\m\1\l\i\c\w\w\t\g\k\h\a\b\m\1\o\v\b\c\t\r\c\g\g\j\8\2\c\n\9\q\3\r\c\0\a\1\6\n\y\4\w\d\m\2\h\4\a\f\j\f\s\3\7\e\u\x\c\d\z\r\s\i\m\a\w\t\i\n\3\f\0\3\b\y\6\j\5\w\4\6\4\n\b\7\l\p\2\g\h\s\9\a\0\1\0\x\l\c\3\j\u\9\5\9\v\3\k\l\9\7\v\n\h\4\h\0\4\9\a\4\3\w\z\4\4\o\h\l\4\0\n\z\w\5\w\0\r\8\v\s\b\1\t\8\g\x\y\e\q\w\g\9\9\p\d\8\0\u\y\3\e\h\9\x\o\i\m\t\r\b\p\2\e\m\p\o\k\m\s\o\l\x\z\a\7\4\w\k\q\s\o\r\l\f\b\5\q\0\c\8\o\u\i\m\3\j\4\g\q\0\t\u\b\c\k\t\n\m\3\h\l\j\x\h\u\r\4\2\a\j\e\3\e\6\9\w\a\m\f\y\y\b\r\l\c\j\0\b\c\a\p\3\u\h\k\v\b\l\s\p\o\g\q\g\o\k\8\j\x\o\8\p\1\k\a\t\4\r\d\h\d\k\4\3\y\3\z\j\e\r\e\j\o\f\6\x\x\j\3\5\k\u\4\z\x\z\w\n\8\m\y\4\7\2\8\g\2\2\4\o\q\d\b\4\r\g\h\d\n\i\v\0\v\y\t\t\v\y\6\i\z\p\7\l\a\v\2\o\4\0\w\j\s\c\w\2\l\1\r\z\c\n\j\4\o\p\b\q\4\e\9\p\o\s\e\h\3\m\k\7\a\g\j\d\v\8\d\j\7\3\a\y\x\a\v\k\z\y\x\0\j\l\g\8\a\f\3\4\7\e\4\w\2\y\a\d\n\s\3\i\4\o\0\c\r\d\5\j\m\u\c\7\1\n\8\2\l\o\1\6\r\o\k\7\b\8\e\q\q\g\a\y\m\t\7\c\d\i\8\f\h\m\e\n\w\x\l\i\t\3\k\g\4\1\y\a\6\v\4\s\b\a\k\o\7\m\8\m\a\b\d\2\l\l\k\k\9\x\u\u\n\j\o\7\8\b\m\p\r\k\9\w\u\h\f\o\t\7\q\7\6\e\v\2\n\2\b\k\n\7\f\i\p\8\k\y\b\d\m\i\f\e\k\m\0\m\m\b\3\8\w\h\q\m\c\9\8\z\x\9\s\5\i\l\p\v\2\c\s\v\v\a\4\q\l\6\w\y\7\y\b\q\a\5\c\r\v\q\u\r\4\5\l\g\h\d\h\o\x\d\3\i\7\u\2\l\x\r\w\4\6\j\n\t\y\z\d\e\s\m\t\l\m\c\9\d\x\p\t\f\o\h\f\s\k\w\3\0\q\3\l\t\8\3\j\w\4\3\j\2\j\1\e\g\i\b\m\i\s\t\a\3\z\u\8\c\o\f\m\y\d\e\5\w\i\e\q\c\z\2\j\4\4\c\v\p\t\c\i\7\u\j\9\f\i\z\l\h\e\8\5\3\t\w\r\9\7\c\e\x\u\w\p\7\8\8\1\r\7\w\q\1\h\i\b\w\w\h\t\y\5\7\a\s\9\m\a\8\c\3\3\p\a\0\c\5\2\7\c\g\c\e\p\6\w\q\7\7\y\2\r\o\5\h\l\r\a\9\k\t\h\3\z\f\y\8\x\4\n\0\d\z\v\d\0\k\k\a\w\0\u\m\m\4\t\8\o\b\o\8\4\q\y\g\k\8\h\f\0\c\m\i\w\x\q\6\8\b\h\r\a\v\g\r\n\j\y\8\0\s\7\t\x\x\t\q\1\e\n\o\5\l\5\j\y\v\l\1\b\s\m\1\i\j\j\q\c\m\4\q\m\i\0\t\0\v\n\5\f\s\2\t\1\o\3\6\z\l\1\j\4\g\u\9\1\t\p\7\a\w\h\a\4\2\7\p\j\w\i\d\r\6\1\9\b\d\u\c\1\y\8\2\4\3\1\g\v\9\u\v\i\8\v\3\v\l\3\y\u\k\l\h\0\h\h\8\m\v\4\g\4\9\c\q\4\q\z\n\m\r\9\f\o\e\h\u\d\o\1\v\t\w\s\e\l\z\h\r\g\2\0\6\y\e\q\8\6\3\r\6\5\s\8\s\6\d\b\n\z\a\v\2\h\o\t\o\d\a\d\z\k\3\8\e\l\n\5\x\9\4\1\b\9\c\r\6\c\c\h\1\1\f\f\0\2\e\f\8\g\h\4\1\3\1\a\4\d\z\c\n\2\b\5\7\y\g\3\4\9\6\3\h\0\k\6\0\k\h\k\8\q\y\q\3\v\2\i\x\p\2\7\w\e\g\0\t\x\e\s\a\u\r\q\b\b\w\r\e\g\u\o\c\n\e\f\w\l\s\p\x\s\2\m\c\o\o\y\d\a\d\j\0\o\3\s\1\h\e\w\k\0\6\p\d\c\9\n\5\h\b\n\p\1\6\k\a\z\s\k\4\v\3\g\k\a\h\2\9\j\h\z\s\g\1\p\8\v\n\t\1\p\6\k\v\0\m\b\u\6\d\c\u\3\b\g\e\6\j\p\s\l\9\f\k\y\4\t\7\p\k\z\f\f\j\t\k\o\h\z\y\k\i\v\d\6\3\l\b\9\x\i\f\i\8\f\p\s\z\6\l\c\r\j\w\a\9\v\0\z\r\6\1\k\3\w\8\t\g\3\g\w\2\3\8\d\j\5\t\s\1\e\x\h\q\a\b\7\j\d\k\3\u\1\g\q\0\r\k\s\v\z\4\x\m\9\y\0\8\x\w\t\n\4\2\w\l\m\4\o\e\b\4\q\v\3\k\3\2\a\d\8\3\j\z\9\e\i\p\8\n\0\h\6\f\u\f\f\6\e\j\2\b\s\z\1\g\2\g\g\w\x\k\h\r\3\7\8\f\3\x\6\m\l\x\9\v\1\x\1\0\w\p\k\u\q\n\1\b\h\j\w\l\8\v\s\d\e\k
\e\o\e\9\u\q\w\7\n\5\k\c\w\3\j\r\i\s\y\q\g\s\i\g\0\8\b\y\2\x\y\g\o\k\s\u\r\p\9\c\n\s\n\0\b\i\2\4\q\o\b\g\d\w\t\7\k\z\7\g\q\j\x\s\i\b\5\q\w\0\l\c\b\2\m\1\f\2\4\4\0\m\u\0\d\u\2\9\j\d\m\i\t\m\q\2\v\d\g\c\t\2\b\r\d\2\f\y\k\7\7\9\c\w\p\0\5\s\i\m\7\j\r\l\3\k\7\t\4\c\i\r\9\p\4\4\u\v\x\8\f\s\8\k\m\m\x\d\d\j\v\p\h\1\n\m\f\5\x\d\k\z\d\9\8\y\u\4\f\q\a\g\0\j\7\k\o\g\y\m\f\l\z\e\b\1\6\p\l\2\5\7\1\8\s\l\c\k\k\r\a\p\3\a\r\k\t\e\x\6\6\p\k\k\5\c\9\8\b\9\k\m\8\5\x\d\5\m\9\q\l\3\y\z\7\b\r\l\u\h\e\z\y\9\k\d\8\3\b\2\x\z\1\3\b\f\h\9\b\a\z\8\s\x\d\a\7\k\l\i\k\o\1\v\0\4\4\v\a\2\g\l\y\t\x\e\7\b\z\b\k\p\r\w\z\y\2\p\5\q\b\3\g\g\m\y\x\4\n\u\i\m\g\z\p\q\z\n\c\j\b\x\d\5\i\1\y\i\w\n\t\y\q\4\0\i\3\f\s\x\p\m\t\g\a\3\6\c\y\o\w\m\0\i\f\l\x\i\f\l\y\h\o\i\h\b\0\w\d\0\8\2\u\f\l\r\k\f\n\t\t\t\c\l\h\o\h\0\6\p\i\b\u\c\h\f\8\q\p\j\0\h\4\n\3\l\4\z\u\4\h\h\z\s\q\z\r\1\i\l\z\l\s\7\9\0\a\s\k\9\o\4\a\t\3\q\g\y\k\b\9\a\d\e\g\i\1\2\f\5\a\k\7\i\n\k\w\9\t\d\a\x\0\v\i\u\d\m\p\g\c\b\m\i\d\h\t\5\3\h\0\s\q\r\j\d\v\h\y\1\t\m\n\5\x\a\u\t\6\j\0\5\g\2\x\s\2\x\v\z\m\2\v\7\6\i\m\4\j\4\v\8\f\2\q\h\9\y\4\a\e\w\2\l\4\3\p\0\a\x\y ]] 00:07:16.082 00:07:16.082 real 0m0.975s 00:07:16.082 user 0m0.692s 00:07:16.082 sys 0m0.341s 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:16.082 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:16.083 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:16.083 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:16.083 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:16.083 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.083 01:50:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.083 [2024-07-25 01:50:31.271042] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:16.083 [2024-07-25 01:50:31.271127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75267 ] 00:07:16.083 { 00:07:16.083 "subsystems": [ 00:07:16.083 { 00:07:16.083 "subsystem": "bdev", 00:07:16.083 "config": [ 00:07:16.083 { 00:07:16.083 "params": { 00:07:16.083 "trtype": "pcie", 00:07:16.083 "traddr": "0000:00:10.0", 00:07:16.083 "name": "Nvme0" 00:07:16.083 }, 00:07:16.083 "method": "bdev_nvme_attach_controller" 00:07:16.083 }, 00:07:16.083 { 00:07:16.083 "method": "bdev_wait_for_examine" 00:07:16.083 } 00:07:16.083 ] 00:07:16.083 } 00:07:16.083 ] 00:07:16.083 } 00:07:16.342 [2024-07-25 01:50:31.392134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
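For orientation: the wall of escaped text above is just bash xtrace quoting the right-hand side of the dd_rw_offset comparison. The test writes the 4096-byte random payload (data=... at the top of this stretch) one block into the Nvme0n1 bdev, reads the same block back, and requires the two copies to match byte for byte. Reconstructed as a sketch, with $CONF standing in for the fd-62 JSON config described above:

# Write at a one-block offset, read the same block back, compare payloads.
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json "$CONF"
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$CONF"
read -rn4096 data_check < dd.dump1      # as in basic_rw.sh@71 above
[[ $data == "$data_check" ]]            # round-trip must be byte-identical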
00:07:16.342 [2024-07-25 01:50:31.406550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.342 [2024-07-25 01:50:31.438423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.342 [2024-07-25 01:50:31.465124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.601  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:16.601 00:07:16.601 01:50:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.601 ************************************ 00:07:16.601 END TEST spdk_dd_basic_rw 00:07:16.601 ************************************ 00:07:16.601 00:07:16.601 real 0m13.911s 00:07:16.601 user 0m10.070s 00:07:16.601 sys 0m4.258s 00:07:16.601 01:50:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.601 01:50:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.601 01:50:31 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:16.601 01:50:31 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.601 01:50:31 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.601 01:50:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:16.601 ************************************ 00:07:16.601 START TEST spdk_dd_posix 00:07:16.601 ************************************ 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:16.601 * Looking for test storage... 00:07:16.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:16.601 * First test run, liburing in use 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:16.601 ************************************ 00:07:16.601 START TEST dd_flag_append 00:07:16.601 ************************************ 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=a4vehf8l57bdohbdmk9p33jymvfp672i 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:16.601 01:50:31 
spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ti3h4opv8slcksl4upgzjkz9j387mj3d 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s a4vehf8l57bdohbdmk9p33jymvfp672i 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s ti3h4opv8slcksl4upgzjkz9j387mj3d 00:07:16.601 01:50:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:16.601 [2024-07-25 01:50:31.880649] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:16.601 [2024-07-25 01:50:31.880724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75326 ] 00:07:16.861 [2024-07-25 01:50:31.996503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.862 [2024-07-25 01:50:32.015013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.862 [2024-07-25 01:50:32.046940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.862 [2024-07-25 01:50:32.075291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.136  Copying: 32/32 [B] (average 31 kBps) 00:07:17.136 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ti3h4opv8slcksl4upgzjkz9j387mj3da4vehf8l57bdohbdmk9p33jymvfp672i == \t\i\3\h\4\o\p\v\8\s\l\c\k\s\l\4\u\p\g\z\j\k\z\9\j\3\8\7\m\j\3\d\a\4\v\e\h\f\8\l\5\7\b\d\o\h\b\d\m\k\9\p\3\3\j\y\m\v\f\p\6\7\2\i ]] 00:07:17.136 00:07:17.136 real 0m0.380s 00:07:17.136 user 0m0.186s 00:07:17.136 sys 0m0.154s 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.136 ************************************ 00:07:17.136 END TEST dd_flag_append 00:07:17.136 ************************************ 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 ************************************ 00:07:17.136 START TEST dd_flag_directory 00:07:17.136 ************************************ 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.136 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.136 [2024-07-25 01:50:32.313982] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:17.136 [2024-07-25 01:50:32.314065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75354 ] 00:07:17.408 [2024-07-25 01:50:32.434296] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
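For orientation: the dd_flag_append run that finished just above is a concatenation check. Two 32-byte random strings are generated (dump0=a4veh..., dump1=ti3h4... in this run) and written to the two dump files, dump0's file is then copied onto dump1's with --oflag=append, and the [[ ... ]] comparison asserts the result equals dump1 immediately followed by dump0. Sketched, with gen_bytes as the harness's random-string helper and $CONF as before:

dump0=$(gen_bytes 32)                      # random payloads, as in posix.sh@19/@20
dump1=$(gen_bytes 32)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append --json "$CONF"
[[ $(<dd.dump1) == "${dump1}${dump0}" ]]   # appended after, not overwritten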
00:07:17.408 [2024-07-25 01:50:32.452957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.408 [2024-07-25 01:50:32.486193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.408 [2024-07-25 01:50:32.512549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.408 [2024-07-25 01:50:32.525900] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.408 [2024-07-25 01:50:32.525947] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.408 [2024-07-25 01:50:32.525976] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.408 [2024-07-25 01:50:32.579532] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.408 01:50:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.667 [2024-07-25 01:50:32.714403] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:17.667 [2024-07-25 01:50:32.714492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75358 ] 00:07:17.667 [2024-07-25 01:50:32.834762] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:17.667 [2024-07-25 01:50:32.852793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.667 [2024-07-25 01:50:32.886330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.667 [2024-07-25 01:50:32.912482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.667 [2024-07-25 01:50:32.925814] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.667 [2024-07-25 01:50:32.925917] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.667 [2024-07-25 01:50:32.925945] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.925 [2024-07-25 01:50:32.981306] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.925 00:07:17.925 real 0m0.795s 00:07:17.925 user 0m0.403s 00:07:17.925 sys 0m0.185s 00:07:17.925 ************************************ 00:07:17.925 END TEST dd_flag_directory 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:17.925 ************************************ 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.925 01:50:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.925 ************************************ 00:07:17.926 START TEST dd_flag_nofollow 00:07:17.926 ************************************ 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.926 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.926 [2024-07-25 01:50:33.172565] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:17.926 [2024-07-25 01:50:33.172666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75391 ] 00:07:18.184 [2024-07-25 01:50:33.292922] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
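For orientation: dd_flag_directory (above) and dd_flag_nofollow (setting up here) are both negative tests. The harness runs spdk_dd under NOT/valid_exec_arg with a flag that must make the open fail, so the "Not a directory" and "Too many levels of symbolic links" errors in this stretch are the expected outcomes, and the es=236/es=216 -> es=108/es=88 -> es=1 sequences are just the wrapper folding the raw exit status into pass/fail. The pattern, sketched with NOT written as a hypothetical one-line inverter:

NOT() { ! "$@"; }                                  # succeed only if the command fails
NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0 --json "$CONF"
ln -fs dd.dump0 dd.dump0.link                      # as in posix.sh@39 above
NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 --json "$CONF"
spdk_dd --if=dd.dump0.link --of=dd.dump1 --json "$CONF"  # no flag: link is followed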
00:07:18.184 [2024-07-25 01:50:33.311584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.184 [2024-07-25 01:50:33.342332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.184 [2024-07-25 01:50:33.368015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.184 [2024-07-25 01:50:33.380971] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:18.184 [2024-07-25 01:50:33.381037] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:18.184 [2024-07-25 01:50:33.381065] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.184 [2024-07-25 01:50:33.433999] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.443 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:18.443 [2024-07-25 01:50:33.574085] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:18.443 [2024-07-25 01:50:33.574177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75396 ] 00:07:18.443 [2024-07-25 01:50:33.694350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:18.443 [2024-07-25 01:50:33.710796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.702 [2024-07-25 01:50:33.742478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.702 [2024-07-25 01:50:33.768664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.702 [2024-07-25 01:50:33.781700] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:18.702 [2024-07-25 01:50:33.781766] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:18.702 [2024-07-25 01:50:33.781796] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.702 [2024-07-25 01:50:33.839074] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:18.702 01:50:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.702 [2024-07-25 01:50:33.982285] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:18.702 [2024-07-25 01:50:33.982391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75404 ] 00:07:18.961 [2024-07-25 01:50:34.102686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:18.961 [2024-07-25 01:50:34.120060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.961 [2024-07-25 01:50:34.151203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.961 [2024-07-25 01:50:34.180421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.221  Copying: 512/512 [B] (average 500 kBps) 00:07:19.221 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ nfogykd1t0kcez00zro7yo6ym6g9udlqouh9ln1k4tndhno694hvfoubmayoiyrx3cnfgfvqhpqi7cg7wmudw86ftrc51i1a9lh6fk64esq4g8aed6osmzm669ha1keplffhk5h8lj44ss6vo7260lgi725e8awwqrn1vf6qxdn1by6ks8xxuqrsyhoncuitbmh508q0kllnr0ym6lig1l0n0zljlgdkysf98qj86qng2yu7ty35nw3qs8mt108yi7qmc1c4vsv7kzzmnbk7fdc8j9ubepnfxgcxwlr9xpui9rf6ookh0syyrjarjqq6irl4y1e41sv83e32nf5d5ff5ehx4pr56eljkq07sugpsfaejjsip9ya7lizl1q0bh7yao7a1wzwe9cetlwsqmucemumnqhx14wq3myjpta09pjboxhmvagcefj6cy3bzggtvpyt5jtxn1zj6j4zwl1xug9cnng3twkemlipl08j1p7uuc268e57wjmvptr0e == \n\f\o\g\y\k\d\1\t\0\k\c\e\z\0\0\z\r\o\7\y\o\6\y\m\6\g\9\u\d\l\q\o\u\h\9\l\n\1\k\4\t\n\d\h\n\o\6\9\4\h\v\f\o\u\b\m\a\y\o\i\y\r\x\3\c\n\f\g\f\v\q\h\p\q\i\7\c\g\7\w\m\u\d\w\8\6\f\t\r\c\5\1\i\1\a\9\l\h\6\f\k\6\4\e\s\q\4\g\8\a\e\d\6\o\s\m\z\m\6\6\9\h\a\1\k\e\p\l\f\f\h\k\5\h\8\l\j\4\4\s\s\6\v\o\7\2\6\0\l\g\i\7\2\5\e\8\a\w\w\q\r\n\1\v\f\6\q\x\d\n\1\b\y\6\k\s\8\x\x\u\q\r\s\y\h\o\n\c\u\i\t\b\m\h\5\0\8\q\0\k\l\l\n\r\0\y\m\6\l\i\g\1\l\0\n\0\z\l\j\l\g\d\k\y\s\f\9\8\q\j\8\6\q\n\g\2\y\u\7\t\y\3\5\n\w\3\q\s\8\m\t\1\0\8\y\i\7\q\m\c\1\c\4\v\s\v\7\k\z\z\m\n\b\k\7\f\d\c\8\j\9\u\b\e\p\n\f\x\g\c\x\w\l\r\9\x\p\u\i\9\r\f\6\o\o\k\h\0\s\y\y\r\j\a\r\j\q\q\6\i\r\l\4\y\1\e\4\1\s\v\8\3\e\3\2\n\f\5\d\5\f\f\5\e\h\x\4\p\r\5\6\e\l\j\k\q\0\7\s\u\g\p\s\f\a\e\j\j\s\i\p\9\y\a\7\l\i\z\l\1\q\0\b\h\7\y\a\o\7\a\1\w\z\w\e\9\c\e\t\l\w\s\q\m\u\c\e\m\u\m\n\q\h\x\1\4\w\q\3\m\y\j\p\t\a\0\9\p\j\b\o\x\h\m\v\a\g\c\e\f\j\6\c\y\3\b\z\g\g\t\v\p\y\t\5\j\t\x\n\1\z\j\6\j\4\z\w\l\1\x\u\g\9\c\n\n\g\3\t\w\k\e\m\l\i\p\l\0\8\j\1\p\7\u\u\c\2\6\8\e\5\7\w\j\m\v\p\t\r\0\e ]] 00:07:19.221 00:07:19.221 real 0m1.217s 00:07:19.221 user 0m0.643s 00:07:19.221 sys 0m0.326s 00:07:19.221 ************************************ 00:07:19.221 END TEST dd_flag_nofollow 00:07:19.221 ************************************ 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 ************************************ 00:07:19.221 START TEST dd_flag_noatime 00:07:19.221 ************************************ 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:19.221 01:50:34 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721872234 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721872234 00:07:19.221 01:50:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:20.158 01:50:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.158 [2024-07-25 01:50:35.454731] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:20.158 [2024-07-25 01:50:35.454856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75446 ] 00:07:20.417 [2024-07-25 01:50:35.575484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.417 [2024-07-25 01:50:35.595826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.417 [2024-07-25 01:50:35.637113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.417 [2024-07-25 01:50:35.668491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.676  Copying: 512/512 [B] (average 500 kBps) 00:07:20.676 00:07:20.676 01:50:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.676 01:50:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721872234 )) 00:07:20.676 01:50:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.676 01:50:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721872234 )) 00:07:20.676 01:50:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.676 [2024-07-25 01:50:35.860915] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:20.676 [2024-07-25 01:50:35.860997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75454 ] 00:07:20.934 [2024-07-25 01:50:35.981358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
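For orientation: dd_flag_noatime snapshots both dump files' access times with stat --printf=%X (1721872234 here), sleeps a second, and copies with --iflag=noatime; the (( atime_if == 1721872234 )) checks above require the atimes to be untouched by the noatime read, while the flagless re-copy must push the source atime past the snapshot, which is what (( atime_if < 1721872236 )) below verifies. Reconstructed, with $CONF as before:

atime_if=$(stat --printf=%X dd.dump0)            # 1721872234 in this run
sleep 1
spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1 --json "$CONF"
(( atime_if == $(stat --printf=%X dd.dump0) ))   # noatime: atime frozen
spdk_dd --if=dd.dump0 --of=dd.dump1 --json "$CONF"
(( atime_if < $(stat --printf=%X dd.dump0) ))    # plain read: atime advances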
00:07:20.934 [2024-07-25 01:50:36.001011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.934 [2024-07-25 01:50:36.032300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.934 [2024-07-25 01:50:36.058143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.934  Copying: 512/512 [B] (average 500 kBps) 00:07:20.934 00:07:20.934 01:50:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.934 01:50:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721872236 )) 00:07:20.934 00:07:20.934 real 0m1.827s 00:07:20.934 user 0m0.404s 00:07:20.934 sys 0m0.379s 00:07:20.934 ************************************ 00:07:20.934 END TEST dd_flag_noatime 00:07:20.934 ************************************ 00:07:20.934 01:50:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.934 01:50:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:21.193 ************************************ 00:07:21.193 START TEST dd_flags_misc 00:07:21.193 ************************************ 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.193 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:21.193 [2024-07-25 01:50:36.317972] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:21.193 [2024-07-25 01:50:36.318072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75487 ] 00:07:21.193 [2024-07-25 01:50:36.438268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:21.193 [2024-07-25 01:50:36.456333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.193 [2024-07-25 01:50:36.487361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.453 [2024-07-25 01:50:36.514125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.453  Copying: 512/512 [B] (average 500 kBps) 00:07:21.453 00:07:21.453 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n6u8pt9g4bgdit980mpay424spltepvfg2wt03nlcpjw3ex0p4uxs61v50ujd3f04qbinmw5xh2c7hep1n9jotarttwllcqcqbpvlh1pcqaxgss7fi7p4add7d9kf1e9bwk98iac1sg7ylxesubloh91g8s7yf7uck3uq7xbsaxxdicork6zphrhdppoi7ccioi4jv71201ui21qumv0du9042iocj3y8wyfp2pifse7pegwx87c9kr0fu9o0ugrvuw5bwh9rr48ts6iq4g2oek8ag1jvynnu0r3b8zu7zpcufcu4sngsz9jl2461pcw5faspp59zrn7oykdw2tmnc701fm3epqigzw2478mzp6qd6y65nn4ydw8cg7f4ed1p99jyfn4d1mtuat6tbstht8cmmw1bjnolugquwu8qwy6mzeqzp2jgx3iyrj2n6ruatp9x9g5g7ca38n8g5wbpw1ooybdhjgpq9uceg6ip4nblp8gy7xhmmb5hr8otbtl == \n\6\u\8\p\t\9\g\4\b\g\d\i\t\9\8\0\m\p\a\y\4\2\4\s\p\l\t\e\p\v\f\g\2\w\t\0\3\n\l\c\p\j\w\3\e\x\0\p\4\u\x\s\6\1\v\5\0\u\j\d\3\f\0\4\q\b\i\n\m\w\5\x\h\2\c\7\h\e\p\1\n\9\j\o\t\a\r\t\t\w\l\l\c\q\c\q\b\p\v\l\h\1\p\c\q\a\x\g\s\s\7\f\i\7\p\4\a\d\d\7\d\9\k\f\1\e\9\b\w\k\9\8\i\a\c\1\s\g\7\y\l\x\e\s\u\b\l\o\h\9\1\g\8\s\7\y\f\7\u\c\k\3\u\q\7\x\b\s\a\x\x\d\i\c\o\r\k\6\z\p\h\r\h\d\p\p\o\i\7\c\c\i\o\i\4\j\v\7\1\2\0\1\u\i\2\1\q\u\m\v\0\d\u\9\0\4\2\i\o\c\j\3\y\8\w\y\f\p\2\p\i\f\s\e\7\p\e\g\w\x\8\7\c\9\k\r\0\f\u\9\o\0\u\g\r\v\u\w\5\b\w\h\9\r\r\4\8\t\s\6\i\q\4\g\2\o\e\k\8\a\g\1\j\v\y\n\n\u\0\r\3\b\8\z\u\7\z\p\c\u\f\c\u\4\s\n\g\s\z\9\j\l\2\4\6\1\p\c\w\5\f\a\s\p\p\5\9\z\r\n\7\o\y\k\d\w\2\t\m\n\c\7\0\1\f\m\3\e\p\q\i\g\z\w\2\4\7\8\m\z\p\6\q\d\6\y\6\5\n\n\4\y\d\w\8\c\g\7\f\4\e\d\1\p\9\9\j\y\f\n\4\d\1\m\t\u\a\t\6\t\b\s\t\h\t\8\c\m\m\w\1\b\j\n\o\l\u\g\q\u\w\u\8\q\w\y\6\m\z\e\q\z\p\2\j\g\x\3\i\y\r\j\2\n\6\r\u\a\t\p\9\x\9\g\5\g\7\c\a\3\8\n\8\g\5\w\b\p\w\1\o\o\y\b\d\h\j\g\p\q\9\u\c\e\g\6\i\p\4\n\b\l\p\8\g\y\7\x\h\m\m\b\5\h\r\8\o\t\b\t\l ]] 00:07:21.453 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.453 01:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:21.453 [2024-07-25 01:50:36.714490] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:21.453 [2024-07-25 01:50:36.714586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75492 ] 00:07:21.712 [2024-07-25 01:50:36.834306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:21.712 [2024-07-25 01:50:36.848628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.712 [2024-07-25 01:50:36.883615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.712 [2024-07-25 01:50:36.910039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.971  Copying: 512/512 [B] (average 500 kBps) 00:07:21.971 00:07:21.971 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n6u8pt9g4bgdit980mpay424spltepvfg2wt03nlcpjw3ex0p4uxs61v50ujd3f04qbinmw5xh2c7hep1n9jotarttwllcqcqbpvlh1pcqaxgss7fi7p4add7d9kf1e9bwk98iac1sg7ylxesubloh91g8s7yf7uck3uq7xbsaxxdicork6zphrhdppoi7ccioi4jv71201ui21qumv0du9042iocj3y8wyfp2pifse7pegwx87c9kr0fu9o0ugrvuw5bwh9rr48ts6iq4g2oek8ag1jvynnu0r3b8zu7zpcufcu4sngsz9jl2461pcw5faspp59zrn7oykdw2tmnc701fm3epqigzw2478mzp6qd6y65nn4ydw8cg7f4ed1p99jyfn4d1mtuat6tbstht8cmmw1bjnolugquwu8qwy6mzeqzp2jgx3iyrj2n6ruatp9x9g5g7ca38n8g5wbpw1ooybdhjgpq9uceg6ip4nblp8gy7xhmmb5hr8otbtl == \n\6\u\8\p\t\9\g\4\b\g\d\i\t\9\8\0\m\p\a\y\4\2\4\s\p\l\t\e\p\v\f\g\2\w\t\0\3\n\l\c\p\j\w\3\e\x\0\p\4\u\x\s\6\1\v\5\0\u\j\d\3\f\0\4\q\b\i\n\m\w\5\x\h\2\c\7\h\e\p\1\n\9\j\o\t\a\r\t\t\w\l\l\c\q\c\q\b\p\v\l\h\1\p\c\q\a\x\g\s\s\7\f\i\7\p\4\a\d\d\7\d\9\k\f\1\e\9\b\w\k\9\8\i\a\c\1\s\g\7\y\l\x\e\s\u\b\l\o\h\9\1\g\8\s\7\y\f\7\u\c\k\3\u\q\7\x\b\s\a\x\x\d\i\c\o\r\k\6\z\p\h\r\h\d\p\p\o\i\7\c\c\i\o\i\4\j\v\7\1\2\0\1\u\i\2\1\q\u\m\v\0\d\u\9\0\4\2\i\o\c\j\3\y\8\w\y\f\p\2\p\i\f\s\e\7\p\e\g\w\x\8\7\c\9\k\r\0\f\u\9\o\0\u\g\r\v\u\w\5\b\w\h\9\r\r\4\8\t\s\6\i\q\4\g\2\o\e\k\8\a\g\1\j\v\y\n\n\u\0\r\3\b\8\z\u\7\z\p\c\u\f\c\u\4\s\n\g\s\z\9\j\l\2\4\6\1\p\c\w\5\f\a\s\p\p\5\9\z\r\n\7\o\y\k\d\w\2\t\m\n\c\7\0\1\f\m\3\e\p\q\i\g\z\w\2\4\7\8\m\z\p\6\q\d\6\y\6\5\n\n\4\y\d\w\8\c\g\7\f\4\e\d\1\p\9\9\j\y\f\n\4\d\1\m\t\u\a\t\6\t\b\s\t\h\t\8\c\m\m\w\1\b\j\n\o\l\u\g\q\u\w\u\8\q\w\y\6\m\z\e\q\z\p\2\j\g\x\3\i\y\r\j\2\n\6\r\u\a\t\p\9\x\9\g\5\g\7\c\a\3\8\n\8\g\5\w\b\p\w\1\o\o\y\b\d\h\j\g\p\q\9\u\c\e\g\6\i\p\4\n\b\l\p\8\g\y\7\x\h\m\m\b\5\h\r\8\o\t\b\t\l ]] 00:07:21.971 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.971 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:21.971 [2024-07-25 01:50:37.110907] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:21.971 [2024-07-25 01:50:37.111015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75502 ] 00:07:21.971 [2024-07-25 01:50:37.230966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:21.971 [2024-07-25 01:50:37.246801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.230 [2024-07-25 01:50:37.278802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.230 [2024-07-25 01:50:37.306558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.230  Copying: 512/512 [B] (average 125 kBps) 00:07:22.230 00:07:22.230 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n6u8pt9g4bgdit980mpay424spltepvfg2wt03nlcpjw3ex0p4uxs61v50ujd3f04qbinmw5xh2c7hep1n9jotarttwllcqcqbpvlh1pcqaxgss7fi7p4add7d9kf1e9bwk98iac1sg7ylxesubloh91g8s7yf7uck3uq7xbsaxxdicork6zphrhdppoi7ccioi4jv71201ui21qumv0du9042iocj3y8wyfp2pifse7pegwx87c9kr0fu9o0ugrvuw5bwh9rr48ts6iq4g2oek8ag1jvynnu0r3b8zu7zpcufcu4sngsz9jl2461pcw5faspp59zrn7oykdw2tmnc701fm3epqigzw2478mzp6qd6y65nn4ydw8cg7f4ed1p99jyfn4d1mtuat6tbstht8cmmw1bjnolugquwu8qwy6mzeqzp2jgx3iyrj2n6ruatp9x9g5g7ca38n8g5wbpw1ooybdhjgpq9uceg6ip4nblp8gy7xhmmb5hr8otbtl == \n\6\u\8\p\t\9\g\4\b\g\d\i\t\9\8\0\m\p\a\y\4\2\4\s\p\l\t\e\p\v\f\g\2\w\t\0\3\n\l\c\p\j\w\3\e\x\0\p\4\u\x\s\6\1\v\5\0\u\j\d\3\f\0\4\q\b\i\n\m\w\5\x\h\2\c\7\h\e\p\1\n\9\j\o\t\a\r\t\t\w\l\l\c\q\c\q\b\p\v\l\h\1\p\c\q\a\x\g\s\s\7\f\i\7\p\4\a\d\d\7\d\9\k\f\1\e\9\b\w\k\9\8\i\a\c\1\s\g\7\y\l\x\e\s\u\b\l\o\h\9\1\g\8\s\7\y\f\7\u\c\k\3\u\q\7\x\b\s\a\x\x\d\i\c\o\r\k\6\z\p\h\r\h\d\p\p\o\i\7\c\c\i\o\i\4\j\v\7\1\2\0\1\u\i\2\1\q\u\m\v\0\d\u\9\0\4\2\i\o\c\j\3\y\8\w\y\f\p\2\p\i\f\s\e\7\p\e\g\w\x\8\7\c\9\k\r\0\f\u\9\o\0\u\g\r\v\u\w\5\b\w\h\9\r\r\4\8\t\s\6\i\q\4\g\2\o\e\k\8\a\g\1\j\v\y\n\n\u\0\r\3\b\8\z\u\7\z\p\c\u\f\c\u\4\s\n\g\s\z\9\j\l\2\4\6\1\p\c\w\5\f\a\s\p\p\5\9\z\r\n\7\o\y\k\d\w\2\t\m\n\c\7\0\1\f\m\3\e\p\q\i\g\z\w\2\4\7\8\m\z\p\6\q\d\6\y\6\5\n\n\4\y\d\w\8\c\g\7\f\4\e\d\1\p\9\9\j\y\f\n\4\d\1\m\t\u\a\t\6\t\b\s\t\h\t\8\c\m\m\w\1\b\j\n\o\l\u\g\q\u\w\u\8\q\w\y\6\m\z\e\q\z\p\2\j\g\x\3\i\y\r\j\2\n\6\r\u\a\t\p\9\x\9\g\5\g\7\c\a\3\8\n\8\g\5\w\b\p\w\1\o\o\y\b\d\h\j\g\p\q\9\u\c\e\g\6\i\p\4\n\b\l\p\8\g\y\7\x\h\m\m\b\5\h\r\8\o\t\b\t\l ]] 00:07:22.230 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:22.231 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:22.231 [2024-07-25 01:50:37.505619] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:22.231 [2024-07-25 01:50:37.505714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75511 ] 00:07:22.490 [2024-07-25 01:50:37.625686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:22.490 [2024-07-25 01:50:37.639716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.490 [2024-07-25 01:50:37.671868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.490 [2024-07-25 01:50:37.701305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.749  Copying: 512/512 [B] (average 500 kBps) 00:07:22.749 00:07:22.749 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n6u8pt9g4bgdit980mpay424spltepvfg2wt03nlcpjw3ex0p4uxs61v50ujd3f04qbinmw5xh2c7hep1n9jotarttwllcqcqbpvlh1pcqaxgss7fi7p4add7d9kf1e9bwk98iac1sg7ylxesubloh91g8s7yf7uck3uq7xbsaxxdicork6zphrhdppoi7ccioi4jv71201ui21qumv0du9042iocj3y8wyfp2pifse7pegwx87c9kr0fu9o0ugrvuw5bwh9rr48ts6iq4g2oek8ag1jvynnu0r3b8zu7zpcufcu4sngsz9jl2461pcw5faspp59zrn7oykdw2tmnc701fm3epqigzw2478mzp6qd6y65nn4ydw8cg7f4ed1p99jyfn4d1mtuat6tbstht8cmmw1bjnolugquwu8qwy6mzeqzp2jgx3iyrj2n6ruatp9x9g5g7ca38n8g5wbpw1ooybdhjgpq9uceg6ip4nblp8gy7xhmmb5hr8otbtl == \n\6\u\8\p\t\9\g\4\b\g\d\i\t\9\8\0\m\p\a\y\4\2\4\s\p\l\t\e\p\v\f\g\2\w\t\0\3\n\l\c\p\j\w\3\e\x\0\p\4\u\x\s\6\1\v\5\0\u\j\d\3\f\0\4\q\b\i\n\m\w\5\x\h\2\c\7\h\e\p\1\n\9\j\o\t\a\r\t\t\w\l\l\c\q\c\q\b\p\v\l\h\1\p\c\q\a\x\g\s\s\7\f\i\7\p\4\a\d\d\7\d\9\k\f\1\e\9\b\w\k\9\8\i\a\c\1\s\g\7\y\l\x\e\s\u\b\l\o\h\9\1\g\8\s\7\y\f\7\u\c\k\3\u\q\7\x\b\s\a\x\x\d\i\c\o\r\k\6\z\p\h\r\h\d\p\p\o\i\7\c\c\i\o\i\4\j\v\7\1\2\0\1\u\i\2\1\q\u\m\v\0\d\u\9\0\4\2\i\o\c\j\3\y\8\w\y\f\p\2\p\i\f\s\e\7\p\e\g\w\x\8\7\c\9\k\r\0\f\u\9\o\0\u\g\r\v\u\w\5\b\w\h\9\r\r\4\8\t\s\6\i\q\4\g\2\o\e\k\8\a\g\1\j\v\y\n\n\u\0\r\3\b\8\z\u\7\z\p\c\u\f\c\u\4\s\n\g\s\z\9\j\l\2\4\6\1\p\c\w\5\f\a\s\p\p\5\9\z\r\n\7\o\y\k\d\w\2\t\m\n\c\7\0\1\f\m\3\e\p\q\i\g\z\w\2\4\7\8\m\z\p\6\q\d\6\y\6\5\n\n\4\y\d\w\8\c\g\7\f\4\e\d\1\p\9\9\j\y\f\n\4\d\1\m\t\u\a\t\6\t\b\s\t\h\t\8\c\m\m\w\1\b\j\n\o\l\u\g\q\u\w\u\8\q\w\y\6\m\z\e\q\z\p\2\j\g\x\3\i\y\r\j\2\n\6\r\u\a\t\p\9\x\9\g\5\g\7\c\a\3\8\n\8\g\5\w\b\p\w\1\o\o\y\b\d\h\j\g\p\q\9\u\c\e\g\6\i\p\4\n\b\l\p\8\g\y\7\x\h\m\m\b\5\h\r\8\o\t\b\t\l ]] 00:07:22.749 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:22.749 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:22.749 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:22.749 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:22.749 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:22.749 01:50:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:22.749 [2024-07-25 01:50:37.896687] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:22.749 [2024-07-25 01:50:37.896783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75515 ] 00:07:22.749 [2024-07-25 01:50:38.017136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
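Each dd_flags_misc pass above pairs one read flag with one write flag and verifies the copy via the long [[ ... == ... ]] guard, which string-compares encoded dumps of the two files. The flag names map directly to open(2) flags: direct is O_DIRECT (bypass the page cache), nonblock is O_NONBLOCK (effectively a no-op for regular files), sync is O_SYNC (flush data and metadata per write), and dsync is O_DSYNC (flush data only). The same matrix can be sketched in GNU dd terms, using cmp in place of the encoded comparison; paths are placeholders and the filesystem is assumed to support O_DIRECT:

for iflag in direct nonblock; do
  for oflag in direct nonblock sync dsync; do
    dd if=/tmp/src iflag="$iflag" of=/tmp/dst oflag="$oflag" bs=512 count=1 2>/dev/null
    cmp -s /tmp/src /tmp/dst || echo "mismatch: iflag=$iflag oflag=$oflag"  # byte-for-byte check
  done
done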
00:07:22.749 [2024-07-25 01:50:38.035947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.008 [2024-07-25 01:50:38.068015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.008 [2024-07-25 01:50:38.094302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.008  Copying: 512/512 [B] (average 500 kBps) 00:07:23.008 00:07:23.008 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tker93a40k272v0pdf30v8mtwfofu88aico8j44wywnshqboeba8pved87hfz16o2spw8oi6ry5ofs1rraei35vautyrm1eszlj2jppifz06071cjd0m1fk67fnc5hxlm743puqesdxpcq8rjvfdwk6zd0c3tv40drpebirkszl5iu7t42y3rkeaxsy7a1y0i9fa20m35rqm662witzvndcsen9er4a3a4966qx1zf0iyn5zjqqe4cx06eytcum915n4fy1qie11yfmjhhvu51g14qazr9w9setj87ioeir6ha2n68elzp85xyt2kpiwrv6mne5xtsbmudwdhf3qzyit0l273deqlimad1j541f52k2a8imru9by6aih0xcomxf7h8vmky97xvr1x7lz1ndylnoee2m3ufbu0mb5nhwmgn4uis1rcoo6o24vpzd0tkuh68ndloevm6i7zp34cz2qof4b3n59l70gshk9rv5f9u5bvzrw3i2i0y3l5rna == \t\k\e\r\9\3\a\4\0\k\2\7\2\v\0\p\d\f\3\0\v\8\m\t\w\f\o\f\u\8\8\a\i\c\o\8\j\4\4\w\y\w\n\s\h\q\b\o\e\b\a\8\p\v\e\d\8\7\h\f\z\1\6\o\2\s\p\w\8\o\i\6\r\y\5\o\f\s\1\r\r\a\e\i\3\5\v\a\u\t\y\r\m\1\e\s\z\l\j\2\j\p\p\i\f\z\0\6\0\7\1\c\j\d\0\m\1\f\k\6\7\f\n\c\5\h\x\l\m\7\4\3\p\u\q\e\s\d\x\p\c\q\8\r\j\v\f\d\w\k\6\z\d\0\c\3\t\v\4\0\d\r\p\e\b\i\r\k\s\z\l\5\i\u\7\t\4\2\y\3\r\k\e\a\x\s\y\7\a\1\y\0\i\9\f\a\2\0\m\3\5\r\q\m\6\6\2\w\i\t\z\v\n\d\c\s\e\n\9\e\r\4\a\3\a\4\9\6\6\q\x\1\z\f\0\i\y\n\5\z\j\q\q\e\4\c\x\0\6\e\y\t\c\u\m\9\1\5\n\4\f\y\1\q\i\e\1\1\y\f\m\j\h\h\v\u\5\1\g\1\4\q\a\z\r\9\w\9\s\e\t\j\8\7\i\o\e\i\r\6\h\a\2\n\6\8\e\l\z\p\8\5\x\y\t\2\k\p\i\w\r\v\6\m\n\e\5\x\t\s\b\m\u\d\w\d\h\f\3\q\z\y\i\t\0\l\2\7\3\d\e\q\l\i\m\a\d\1\j\5\4\1\f\5\2\k\2\a\8\i\m\r\u\9\b\y\6\a\i\h\0\x\c\o\m\x\f\7\h\8\v\m\k\y\9\7\x\v\r\1\x\7\l\z\1\n\d\y\l\n\o\e\e\2\m\3\u\f\b\u\0\m\b\5\n\h\w\m\g\n\4\u\i\s\1\r\c\o\o\6\o\2\4\v\p\z\d\0\t\k\u\h\6\8\n\d\l\o\e\v\m\6\i\7\z\p\3\4\c\z\2\q\o\f\4\b\3\n\5\9\l\7\0\g\s\h\k\9\r\v\5\f\9\u\5\b\v\z\r\w\3\i\2\i\0\y\3\l\5\r\n\a ]] 00:07:23.008 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.008 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:23.008 [2024-07-25 01:50:38.274671] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:23.008 [2024-07-25 01:50:38.274766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75525 ] 00:07:23.267 [2024-07-25 01:50:38.394657] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:23.267 [2024-07-25 01:50:38.410780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.267 [2024-07-25 01:50:38.441536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.267 [2024-07-25 01:50:38.467788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.526  Copying: 512/512 [B] (average 500 kBps) 00:07:23.526 00:07:23.526 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tker93a40k272v0pdf30v8mtwfofu88aico8j44wywnshqboeba8pved87hfz16o2spw8oi6ry5ofs1rraei35vautyrm1eszlj2jppifz06071cjd0m1fk67fnc5hxlm743puqesdxpcq8rjvfdwk6zd0c3tv40drpebirkszl5iu7t42y3rkeaxsy7a1y0i9fa20m35rqm662witzvndcsen9er4a3a4966qx1zf0iyn5zjqqe4cx06eytcum915n4fy1qie11yfmjhhvu51g14qazr9w9setj87ioeir6ha2n68elzp85xyt2kpiwrv6mne5xtsbmudwdhf3qzyit0l273deqlimad1j541f52k2a8imru9by6aih0xcomxf7h8vmky97xvr1x7lz1ndylnoee2m3ufbu0mb5nhwmgn4uis1rcoo6o24vpzd0tkuh68ndloevm6i7zp34cz2qof4b3n59l70gshk9rv5f9u5bvzrw3i2i0y3l5rna == \t\k\e\r\9\3\a\4\0\k\2\7\2\v\0\p\d\f\3\0\v\8\m\t\w\f\o\f\u\8\8\a\i\c\o\8\j\4\4\w\y\w\n\s\h\q\b\o\e\b\a\8\p\v\e\d\8\7\h\f\z\1\6\o\2\s\p\w\8\o\i\6\r\y\5\o\f\s\1\r\r\a\e\i\3\5\v\a\u\t\y\r\m\1\e\s\z\l\j\2\j\p\p\i\f\z\0\6\0\7\1\c\j\d\0\m\1\f\k\6\7\f\n\c\5\h\x\l\m\7\4\3\p\u\q\e\s\d\x\p\c\q\8\r\j\v\f\d\w\k\6\z\d\0\c\3\t\v\4\0\d\r\p\e\b\i\r\k\s\z\l\5\i\u\7\t\4\2\y\3\r\k\e\a\x\s\y\7\a\1\y\0\i\9\f\a\2\0\m\3\5\r\q\m\6\6\2\w\i\t\z\v\n\d\c\s\e\n\9\e\r\4\a\3\a\4\9\6\6\q\x\1\z\f\0\i\y\n\5\z\j\q\q\e\4\c\x\0\6\e\y\t\c\u\m\9\1\5\n\4\f\y\1\q\i\e\1\1\y\f\m\j\h\h\v\u\5\1\g\1\4\q\a\z\r\9\w\9\s\e\t\j\8\7\i\o\e\i\r\6\h\a\2\n\6\8\e\l\z\p\8\5\x\y\t\2\k\p\i\w\r\v\6\m\n\e\5\x\t\s\b\m\u\d\w\d\h\f\3\q\z\y\i\t\0\l\2\7\3\d\e\q\l\i\m\a\d\1\j\5\4\1\f\5\2\k\2\a\8\i\m\r\u\9\b\y\6\a\i\h\0\x\c\o\m\x\f\7\h\8\v\m\k\y\9\7\x\v\r\1\x\7\l\z\1\n\d\y\l\n\o\e\e\2\m\3\u\f\b\u\0\m\b\5\n\h\w\m\g\n\4\u\i\s\1\r\c\o\o\6\o\2\4\v\p\z\d\0\t\k\u\h\6\8\n\d\l\o\e\v\m\6\i\7\z\p\3\4\c\z\2\q\o\f\4\b\3\n\5\9\l\7\0\g\s\h\k\9\r\v\5\f\9\u\5\b\v\z\r\w\3\i\2\i\0\y\3\l\5\r\n\a ]] 00:07:23.526 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.526 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:23.526 [2024-07-25 01:50:38.646246] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:23.526 [2024-07-25 01:50:38.646346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75534 ] 00:07:23.526 [2024-07-25 01:50:38.766586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:23.526 [2024-07-25 01:50:38.784753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.526 [2024-07-25 01:50:38.815550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.784 [2024-07-25 01:50:38.842696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.784  Copying: 512/512 [B] (average 166 kBps) 00:07:23.784 00:07:23.785 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tker93a40k272v0pdf30v8mtwfofu88aico8j44wywnshqboeba8pved87hfz16o2spw8oi6ry5ofs1rraei35vautyrm1eszlj2jppifz06071cjd0m1fk67fnc5hxlm743puqesdxpcq8rjvfdwk6zd0c3tv40drpebirkszl5iu7t42y3rkeaxsy7a1y0i9fa20m35rqm662witzvndcsen9er4a3a4966qx1zf0iyn5zjqqe4cx06eytcum915n4fy1qie11yfmjhhvu51g14qazr9w9setj87ioeir6ha2n68elzp85xyt2kpiwrv6mne5xtsbmudwdhf3qzyit0l273deqlimad1j541f52k2a8imru9by6aih0xcomxf7h8vmky97xvr1x7lz1ndylnoee2m3ufbu0mb5nhwmgn4uis1rcoo6o24vpzd0tkuh68ndloevm6i7zp34cz2qof4b3n59l70gshk9rv5f9u5bvzrw3i2i0y3l5rna == \t\k\e\r\9\3\a\4\0\k\2\7\2\v\0\p\d\f\3\0\v\8\m\t\w\f\o\f\u\8\8\a\i\c\o\8\j\4\4\w\y\w\n\s\h\q\b\o\e\b\a\8\p\v\e\d\8\7\h\f\z\1\6\o\2\s\p\w\8\o\i\6\r\y\5\o\f\s\1\r\r\a\e\i\3\5\v\a\u\t\y\r\m\1\e\s\z\l\j\2\j\p\p\i\f\z\0\6\0\7\1\c\j\d\0\m\1\f\k\6\7\f\n\c\5\h\x\l\m\7\4\3\p\u\q\e\s\d\x\p\c\q\8\r\j\v\f\d\w\k\6\z\d\0\c\3\t\v\4\0\d\r\p\e\b\i\r\k\s\z\l\5\i\u\7\t\4\2\y\3\r\k\e\a\x\s\y\7\a\1\y\0\i\9\f\a\2\0\m\3\5\r\q\m\6\6\2\w\i\t\z\v\n\d\c\s\e\n\9\e\r\4\a\3\a\4\9\6\6\q\x\1\z\f\0\i\y\n\5\z\j\q\q\e\4\c\x\0\6\e\y\t\c\u\m\9\1\5\n\4\f\y\1\q\i\e\1\1\y\f\m\j\h\h\v\u\5\1\g\1\4\q\a\z\r\9\w\9\s\e\t\j\8\7\i\o\e\i\r\6\h\a\2\n\6\8\e\l\z\p\8\5\x\y\t\2\k\p\i\w\r\v\6\m\n\e\5\x\t\s\b\m\u\d\w\d\h\f\3\q\z\y\i\t\0\l\2\7\3\d\e\q\l\i\m\a\d\1\j\5\4\1\f\5\2\k\2\a\8\i\m\r\u\9\b\y\6\a\i\h\0\x\c\o\m\x\f\7\h\8\v\m\k\y\9\7\x\v\r\1\x\7\l\z\1\n\d\y\l\n\o\e\e\2\m\3\u\f\b\u\0\m\b\5\n\h\w\m\g\n\4\u\i\s\1\r\c\o\o\6\o\2\4\v\p\z\d\0\t\k\u\h\6\8\n\d\l\o\e\v\m\6\i\7\z\p\3\4\c\z\2\q\o\f\4\b\3\n\5\9\l\7\0\g\s\h\k\9\r\v\5\f\9\u\5\b\v\z\r\w\3\i\2\i\0\y\3\l\5\r\n\a ]] 00:07:23.785 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.785 01:50:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:23.785 [2024-07-25 01:50:39.027692] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:23.785 [2024-07-25 01:50:39.027790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75544 ] 00:07:24.045 [2024-07-25 01:50:39.147830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:24.045 [2024-07-25 01:50:39.164420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.045 [2024-07-25 01:50:39.195132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.045 [2024-07-25 01:50:39.220934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.304  Copying: 512/512 [B] (average 500 kBps) 00:07:24.304 00:07:24.304 ************************************ 00:07:24.304 END TEST dd_flags_misc 00:07:24.304 ************************************ 00:07:24.304 01:50:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tker93a40k272v0pdf30v8mtwfofu88aico8j44wywnshqboeba8pved87hfz16o2spw8oi6ry5ofs1rraei35vautyrm1eszlj2jppifz06071cjd0m1fk67fnc5hxlm743puqesdxpcq8rjvfdwk6zd0c3tv40drpebirkszl5iu7t42y3rkeaxsy7a1y0i9fa20m35rqm662witzvndcsen9er4a3a4966qx1zf0iyn5zjqqe4cx06eytcum915n4fy1qie11yfmjhhvu51g14qazr9w9setj87ioeir6ha2n68elzp85xyt2kpiwrv6mne5xtsbmudwdhf3qzyit0l273deqlimad1j541f52k2a8imru9by6aih0xcomxf7h8vmky97xvr1x7lz1ndylnoee2m3ufbu0mb5nhwmgn4uis1rcoo6o24vpzd0tkuh68ndloevm6i7zp34cz2qof4b3n59l70gshk9rv5f9u5bvzrw3i2i0y3l5rna == \t\k\e\r\9\3\a\4\0\k\2\7\2\v\0\p\d\f\3\0\v\8\m\t\w\f\o\f\u\8\8\a\i\c\o\8\j\4\4\w\y\w\n\s\h\q\b\o\e\b\a\8\p\v\e\d\8\7\h\f\z\1\6\o\2\s\p\w\8\o\i\6\r\y\5\o\f\s\1\r\r\a\e\i\3\5\v\a\u\t\y\r\m\1\e\s\z\l\j\2\j\p\p\i\f\z\0\6\0\7\1\c\j\d\0\m\1\f\k\6\7\f\n\c\5\h\x\l\m\7\4\3\p\u\q\e\s\d\x\p\c\q\8\r\j\v\f\d\w\k\6\z\d\0\c\3\t\v\4\0\d\r\p\e\b\i\r\k\s\z\l\5\i\u\7\t\4\2\y\3\r\k\e\a\x\s\y\7\a\1\y\0\i\9\f\a\2\0\m\3\5\r\q\m\6\6\2\w\i\t\z\v\n\d\c\s\e\n\9\e\r\4\a\3\a\4\9\6\6\q\x\1\z\f\0\i\y\n\5\z\j\q\q\e\4\c\x\0\6\e\y\t\c\u\m\9\1\5\n\4\f\y\1\q\i\e\1\1\y\f\m\j\h\h\v\u\5\1\g\1\4\q\a\z\r\9\w\9\s\e\t\j\8\7\i\o\e\i\r\6\h\a\2\n\6\8\e\l\z\p\8\5\x\y\t\2\k\p\i\w\r\v\6\m\n\e\5\x\t\s\b\m\u\d\w\d\h\f\3\q\z\y\i\t\0\l\2\7\3\d\e\q\l\i\m\a\d\1\j\5\4\1\f\5\2\k\2\a\8\i\m\r\u\9\b\y\6\a\i\h\0\x\c\o\m\x\f\7\h\8\v\m\k\y\9\7\x\v\r\1\x\7\l\z\1\n\d\y\l\n\o\e\e\2\m\3\u\f\b\u\0\m\b\5\n\h\w\m\g\n\4\u\i\s\1\r\c\o\o\6\o\2\4\v\p\z\d\0\t\k\u\h\6\8\n\d\l\o\e\v\m\6\i\7\z\p\3\4\c\z\2\q\o\f\4\b\3\n\5\9\l\7\0\g\s\h\k\9\r\v\5\f\9\u\5\b\v\z\r\w\3\i\2\i\0\y\3\l\5\r\n\a ]] 00:07:24.304 00:07:24.304 real 0m3.106s 00:07:24.304 user 0m1.547s 00:07:24.304 sys 0m1.257s 00:07:24.304 01:50:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.304 01:50:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:24.304 01:50:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:24.304 01:50:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:24.304 * Second test run, disabling liburing, forcing AIO 00:07:24.304 01:50:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:24.304 01:50:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:24.305 ************************************ 00:07:24.305 START TEST dd_flag_append_forced_aio 00:07:24.305 ************************************ 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:07:24.305 01:50:39 
spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=qf5qwn7r9vuoaayn1x7s4m6zaovg0d17 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=p67oai8a1uuyhaxi3esvj3sp5b32egxa 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s qf5qwn7r9vuoaayn1x7s4m6zaovg0d17 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s p67oai8a1uuyhaxi3esvj3sp5b32egxa 00:07:24.305 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:24.305 [2024-07-25 01:50:39.473503] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:24.305 [2024-07-25 01:50:39.473612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75572 ] 00:07:24.305 [2024-07-25 01:50:39.587712] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
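From the "Second test run" banner above onward, the harness appends --aio to every spdk_dd invocation via DD_APP+=("--aio"), swapping the io_uring engine for POSIX AIO while reusing the same posix.sh test bodies. The append test itself writes 32 random bytes into each dump file, copies dump0 onto dump1 with --oflag=append, and the comparison just below asserts dump1 now holds its original bytes followed by dump0's, i.e. plain O_APPEND semantics. A minimal sketch with stand-in contents (GNU dd wants conv=notrunc here so the destination is not truncated before the append):

printf 'AAAA' > /tmp/dump0                              # stand-ins for the 32 random bytes
printf 'BBBB' > /tmp/dump1
dd if=/tmp/dump0 of=/tmp/dump1 oflag=append conv=notrunc 2>/dev/null  # O_APPEND write
[ "$(cat /tmp/dump1)" = 'BBBBAAAA' ] && echo "append semantics hold"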
00:07:24.305 [2024-07-25 01:50:39.601743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.564 [2024-07-25 01:50:39.634090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.564 [2024-07-25 01:50:39.660259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.564  Copying: 32/32 [B] (average 31 kBps) 00:07:24.564 00:07:24.564 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ p67oai8a1uuyhaxi3esvj3sp5b32egxaqf5qwn7r9vuoaayn1x7s4m6zaovg0d17 == \p\6\7\o\a\i\8\a\1\u\u\y\h\a\x\i\3\e\s\v\j\3\s\p\5\b\3\2\e\g\x\a\q\f\5\q\w\n\7\r\9\v\u\o\a\a\y\n\1\x\7\s\4\m\6\z\a\o\v\g\0\d\1\7 ]] 00:07:24.564 00:07:24.564 real 0m0.400s 00:07:24.564 user 0m0.197s 00:07:24.564 sys 0m0.084s 00:07:24.564 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.564 ************************************ 00:07:24.564 END TEST dd_flag_append_forced_aio 00:07:24.564 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:24.564 ************************************ 00:07:24.822 01:50:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:24.822 01:50:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.822 01:50:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.822 01:50:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:24.822 ************************************ 00:07:24.822 START TEST dd_flag_directory_forced_aio 00:07:24.822 ************************************ 00:07:24.822 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.823 01:50:39 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.823 [2024-07-25 01:50:39.927333] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:24.823 [2024-07-25 01:50:39.927439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75593 ] 00:07:24.823 [2024-07-25 01:50:40.048704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.823 [2024-07-25 01:50:40.067509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.823 [2024-07-25 01:50:40.098611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.082 [2024-07-25 01:50:40.125750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.082 [2024-07-25 01:50:40.139230] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:25.082 [2024-07-25 01:50:40.139291] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:25.082 [2024-07-25 01:50:40.139320] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.082 [2024-07-25 01:50:40.193638] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.082 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:25.082 [2024-07-25 01:50:40.324463] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:25.082 [2024-07-25 01:50:40.324585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75603 ] 00:07:25.341 [2024-07-25 01:50:40.445362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
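Both directory-flag runs are wrapped in NOT and are required to fail: O_DIRECTORY tells open() to refuse any target that is not a directory, so aiming --iflag=directory or --oflag=directory at a regular dump file yields ENOTDIR ("Not a directory") and a non-zero exit, which the wrapper's es=236 -> es=108 -> es=1 bookkeeping folds back into a pass. The same expected failure, sketched with GNU dd and a placeholder path:

dd if=/tmp/dump0 iflag=directory of=/dev/null 2>&1 | grep -q 'Not a directory' \
  && echo "open() rejected a regular file, as expected"   # non-zero dd exit is the point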
00:07:25.341 [2024-07-25 01:50:40.457299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.341 [2024-07-25 01:50:40.491052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.341 [2024-07-25 01:50:40.517516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.341 [2024-07-25 01:50:40.531063] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:25.341 [2024-07-25 01:50:40.531271] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:25.341 [2024-07-25 01:50:40.531339] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.341 [2024-07-25 01:50:40.585429] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.600 00:07:25.600 real 0m0.789s 00:07:25.600 user 0m0.405s 00:07:25.600 sys 0m0.175s 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:25.600 ************************************ 00:07:25.600 END TEST dd_flag_directory_forced_aio 00:07:25.600 ************************************ 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:25.600 ************************************ 00:07:25.600 START TEST dd_flag_nofollow_forced_aio 00:07:25.600 ************************************ 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.600 01:50:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.600 [2024-07-25 01:50:40.779254] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:25.600 [2024-07-25 01:50:40.779348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75631 ] 00:07:25.859 [2024-07-25 01:50:40.900011] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:25.859 [2024-07-25 01:50:40.916723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.859 [2024-07-25 01:50:40.950131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.859 [2024-07-25 01:50:40.978590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.859 [2024-07-25 01:50:40.992081] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:25.859 [2024-07-25 01:50:40.992138] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:25.859 [2024-07-25 01:50:40.992150] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.859 [2024-07-25 01:50:41.050172] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.859 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:26.119 [2024-07-25 01:50:41.184153] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:26.119 [2024-07-25 01:50:41.184243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75641 ] 00:07:26.119 [2024-07-25 01:50:41.304169] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.119 [2024-07-25 01:50:41.321335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.119 [2024-07-25 01:50:41.352321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.119 [2024-07-25 01:50:41.378269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.119 [2024-07-25 01:50:41.391447] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:26.119 [2024-07-25 01:50:41.391504] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:26.119 [2024-07-25 01:50:41.391518] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.378 [2024-07-25 01:50:41.446727] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:26.378 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:26.379 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.379 [2024-07-25 01:50:41.576988] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
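The two NOT-wrapped nofollow runs fail because O_NOFOLLOW makes open() return ELOOP ("Too many levels of symbolic links") when the final path component is a symlink; the run starting here reads through dump0.link without the flag, so the link is followed and the 512-byte copy below succeeds. Sketched with GNU dd and placeholder paths:

printf 'payload' > /tmp/dump0
ln -fs /tmp/dump0 /tmp/dump0.link                       # symlink target for both opens
dd if=/tmp/dump0.link iflag=nofollow of=/dev/null 2>&1 \
  | grep -q 'symbolic links' && echo "nofollow rejected the link, as expected"
dd if=/tmp/dump0.link of=/dev/null 2>/dev/null && echo "a plain open follows it"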
00:07:26.379 [2024-07-25 01:50:41.577061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75648 ] 00:07:26.638 [2024-07-25 01:50:41.690755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.638 [2024-07-25 01:50:41.707701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.638 [2024-07-25 01:50:41.738775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.638 [2024-07-25 01:50:41.764506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.638  Copying: 512/512 [B] (average 500 kBps) 00:07:26.638 00:07:26.638 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 3rcpke1qvshfjfyfvro6ybe93n8b4gxw3g1uojcculqjh909da7d3wpcr12xjptw7v07wfazn9fuv3odwo8qpcmxqp1ozav04rlyk5i6dfkdb7j1vhfxkww12tuwt6h19g4jdmn9lntwvpirnghe7zolx8ro5sktmg4stup8g01447q1z6elplqh8ehvbwo67jn4rqqjqmnimb5g8lyjhq240wqz5dpwntvsf0w1cphbagpipl9xk5pqdbsflhi81h3dvfujb4i8i74xwnvl6qv591ctmjnytqz3zjezvteq0rj7btwlttlsqodwzpl6iakbh8jepo0vmr37jdz5fasbrgz259et1e9ot9jj5ptihi5bs7fkrfkp90jnfwes3f7zpoe14o2mij648ubfmr8kohh6g6oiuhmk2tbbdrtkad47ib9xx7cfs0u815pphz608lihdw9f81xczyzo8g77khaelw91cxx3asacjqwj32phns4jpyo5fsuwmbbr == \3\r\c\p\k\e\1\q\v\s\h\f\j\f\y\f\v\r\o\6\y\b\e\9\3\n\8\b\4\g\x\w\3\g\1\u\o\j\c\c\u\l\q\j\h\9\0\9\d\a\7\d\3\w\p\c\r\1\2\x\j\p\t\w\7\v\0\7\w\f\a\z\n\9\f\u\v\3\o\d\w\o\8\q\p\c\m\x\q\p\1\o\z\a\v\0\4\r\l\y\k\5\i\6\d\f\k\d\b\7\j\1\v\h\f\x\k\w\w\1\2\t\u\w\t\6\h\1\9\g\4\j\d\m\n\9\l\n\t\w\v\p\i\r\n\g\h\e\7\z\o\l\x\8\r\o\5\s\k\t\m\g\4\s\t\u\p\8\g\0\1\4\4\7\q\1\z\6\e\l\p\l\q\h\8\e\h\v\b\w\o\6\7\j\n\4\r\q\q\j\q\m\n\i\m\b\5\g\8\l\y\j\h\q\2\4\0\w\q\z\5\d\p\w\n\t\v\s\f\0\w\1\c\p\h\b\a\g\p\i\p\l\9\x\k\5\p\q\d\b\s\f\l\h\i\8\1\h\3\d\v\f\u\j\b\4\i\8\i\7\4\x\w\n\v\l\6\q\v\5\9\1\c\t\m\j\n\y\t\q\z\3\z\j\e\z\v\t\e\q\0\r\j\7\b\t\w\l\t\t\l\s\q\o\d\w\z\p\l\6\i\a\k\b\h\8\j\e\p\o\0\v\m\r\3\7\j\d\z\5\f\a\s\b\r\g\z\2\5\9\e\t\1\e\9\o\t\9\j\j\5\p\t\i\h\i\5\b\s\7\f\k\r\f\k\p\9\0\j\n\f\w\e\s\3\f\7\z\p\o\e\1\4\o\2\m\i\j\6\4\8\u\b\f\m\r\8\k\o\h\h\6\g\6\o\i\u\h\m\k\2\t\b\b\d\r\t\k\a\d\4\7\i\b\9\x\x\7\c\f\s\0\u\8\1\5\p\p\h\z\6\0\8\l\i\h\d\w\9\f\8\1\x\c\z\y\z\o\8\g\7\7\k\h\a\e\l\w\9\1\c\x\x\3\a\s\a\c\j\q\w\j\3\2\p\h\n\s\4\j\p\y\o\5\f\s\u\w\m\b\b\r ]] 00:07:26.638 00:07:26.638 real 0m1.210s 00:07:26.638 user 0m0.624s 00:07:26.638 sys 0m0.261s 00:07:26.638 ************************************ 00:07:26.638 END TEST dd_flag_nofollow_forced_aio 00:07:26.638 ************************************ 00:07:26.638 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.638 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:26.897 ************************************ 00:07:26.897 START TEST dd_flag_noatime_forced_aio 00:07:26.897 
************************************ 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721872241 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721872241 00:07:26.897 01:50:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:27.834 01:50:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.834 [2024-07-25 01:50:43.049133] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:27.834 [2024-07-25 01:50:43.049218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75689 ] 00:07:28.093 [2024-07-25 01:50:43.170577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:28.093 [2024-07-25 01:50:43.191303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.093 [2024-07-25 01:50:43.233667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.093 [2024-07-25 01:50:43.266916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.352  Copying: 512/512 [B] (average 500 kBps) 00:07:28.352 00:07:28.352 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.352 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721872241 )) 00:07:28.352 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.352 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721872241 )) 00:07:28.352 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.352 [2024-07-25 01:50:43.504560] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:28.352 [2024-07-25 01:50:43.504645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75695 ] 00:07:28.352 [2024-07-25 01:50:43.625138] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:28.352 [2024-07-25 01:50:43.643077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.611 [2024-07-25 01:50:43.676996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.611 [2024-07-25 01:50:43.704566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.611  Copying: 512/512 [B] (average 500 kBps) 00:07:28.611 00:07:28.611 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.611 ************************************ 00:07:28.611 END TEST dd_flag_noatime_forced_aio 00:07:28.611 ************************************ 00:07:28.611 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721872243 )) 00:07:28.611 00:07:28.611 real 0m1.898s 00:07:28.611 user 0m0.436s 00:07:28.611 sys 0m0.221s 00:07:28.611 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.611 01:50:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:28.870 ************************************ 00:07:28.870 START TEST dd_flags_misc_forced_aio 00:07:28.870 ************************************ 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.870 01:50:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:28.870 [2024-07-25 01:50:43.995972] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:07:28.870 [2024-07-25 01:50:43.996277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75721 ] 00:07:28.870 [2024-07-25 01:50:44.117916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:28.870 [2024-07-25 01:50:44.135369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.130 [2024-07-25 01:50:44.169299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.130 [2024-07-25 01:50:44.197555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.130  Copying: 512/512 [B] (average 500 kBps) 00:07:29.130 00:07:29.130 01:50:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hxb04tj1zr1cltndc3nwx3is2v9b2lbjccbeq4f6pn79juk6ext8rtehesj1udwfxac8lw1kqa6jtdralnaqw2em48nbex4zoevga6rpbc6goehy9yuwb41zw35pskr4jeabj0i4sgg23lj6rhuo7fg7k0bg10ui9fv3scl1kad5qkfa4c9hly7ex2rpw4nfv2unmt2alj7ixdjvtkt0444jtxpccgp2jg8gewrgjyvkowg3iwdy3npfz1t7vgyetsjoe4mzaj785yb6n4nvpa1dymihdh7il7za0oya549ifelml8zdhy8pcl6f9r2zu326xgbksubdx64fwsw3nwcj2eklnhku5imoyirmzlvzpepxudofvgnicp8r6pehxikzjbzdqz49sedwc6wydxoc0kuzhdyav4g28gsbjr24aqlt261z64vk0s33p1a66q4rhmbkgr03kzzxmxbj38uo21ph72s83wdkgjxlmejjdccqb13y6732ssygaz88 == \h\x\b\0\4\t\j\1\z\r\1\c\l\t\n\d\c\3\n\w\x\3\i\s\2\v\9\b\2\l\b\j\c\c\b\e\q\4\f\6\p\n\7\9\j\u\k\6\e\x\t\8\r\t\e\h\e\s\j\1\u\d\w\f\x\a\c\8\l\w\1\k\q\a\6\j\t\d\r\a\l\n\a\q\w\2\e\m\4\8\n\b\e\x\4\z\o\e\v\g\a\6\r\p\b\c\6\g\o\e\h\y\9\y\u\w\b\4\1\z\w\3\5\p\s\k\r\4\j\e\a\b\j\0\i\4\s\g\g\2\3\l\j\6\r\h\u\o\7\f\g\7\k\0\b\g\1\0\u\i\9\f\v\3\s\c\l\1\k\a\d\5\q\k\f\a\4\c\9\h\l\y\7\e\x\2\r\p\w\4\n\f\v\2\u\n\m\t\2\a\l\j\7\i\x\d\j\v\t\k\t\0\4\4\4\j\t\x\p\c\c\g\p\2\j\g\8\g\e\w\r\g\j\y\v\k\o\w\g\3\i\w\d\y\3\n\p\f\z\1\t\7\v\g\y\e\t\s\j\o\e\4\m\z\a\j\7\8\5\y\b\6\n\4\n\v\p\a\1\d\y\m\i\h\d\h\7\i\l\7\z\a\0\o\y\a\5\4\9\i\f\e\l\m\l\8\z\d\h\y\8\p\c\l\6\f\9\r\2\z\u\3\2\6\x\g\b\k\s\u\b\d\x\6\4\f\w\s\w\3\n\w\c\j\2\e\k\l\n\h\k\u\5\i\m\o\y\i\r\m\z\l\v\z\p\e\p\x\u\d\o\f\v\g\n\i\c\p\8\r\6\p\e\h\x\i\k\z\j\b\z\d\q\z\4\9\s\e\d\w\c\6\w\y\d\x\o\c\0\k\u\z\h\d\y\a\v\4\g\2\8\g\s\b\j\r\2\4\a\q\l\t\2\6\1\z\6\4\v\k\0\s\3\3\p\1\a\6\6\q\4\r\h\m\b\k\g\r\0\3\k\z\z\x\m\x\b\j\3\8\u\o\2\1\p\h\7\2\s\8\3\w\d\k\g\j\x\l\m\e\j\j\d\c\c\q\b\1\3\y\6\7\3\2\s\s\y\g\a\z\8\8 ]] 00:07:29.130 01:50:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.130 01:50:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:29.130 [2024-07-25 01:50:44.414779] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:29.130 [2024-07-25 01:50:44.414884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75729 ] 00:07:29.388 [2024-07-25 01:50:44.535717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:29.388 [2024-07-25 01:50:44.552790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.388 [2024-07-25 01:50:44.585545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.388 [2024-07-25 01:50:44.613720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.646  Copying: 512/512 [B] (average 500 kBps) 00:07:29.646 00:07:29.646 01:50:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hxb04tj1zr1cltndc3nwx3is2v9b2lbjccbeq4f6pn79juk6ext8rtehesj1udwfxac8lw1kqa6jtdralnaqw2em48nbex4zoevga6rpbc6goehy9yuwb41zw35pskr4jeabj0i4sgg23lj6rhuo7fg7k0bg10ui9fv3scl1kad5qkfa4c9hly7ex2rpw4nfv2unmt2alj7ixdjvtkt0444jtxpccgp2jg8gewrgjyvkowg3iwdy3npfz1t7vgyetsjoe4mzaj785yb6n4nvpa1dymihdh7il7za0oya549ifelml8zdhy8pcl6f9r2zu326xgbksubdx64fwsw3nwcj2eklnhku5imoyirmzlvzpepxudofvgnicp8r6pehxikzjbzdqz49sedwc6wydxoc0kuzhdyav4g28gsbjr24aqlt261z64vk0s33p1a66q4rhmbkgr03kzzxmxbj38uo21ph72s83wdkgjxlmejjdccqb13y6732ssygaz88 == \h\x\b\0\4\t\j\1\z\r\1\c\l\t\n\d\c\3\n\w\x\3\i\s\2\v\9\b\2\l\b\j\c\c\b\e\q\4\f\6\p\n\7\9\j\u\k\6\e\x\t\8\r\t\e\h\e\s\j\1\u\d\w\f\x\a\c\8\l\w\1\k\q\a\6\j\t\d\r\a\l\n\a\q\w\2\e\m\4\8\n\b\e\x\4\z\o\e\v\g\a\6\r\p\b\c\6\g\o\e\h\y\9\y\u\w\b\4\1\z\w\3\5\p\s\k\r\4\j\e\a\b\j\0\i\4\s\g\g\2\3\l\j\6\r\h\u\o\7\f\g\7\k\0\b\g\1\0\u\i\9\f\v\3\s\c\l\1\k\a\d\5\q\k\f\a\4\c\9\h\l\y\7\e\x\2\r\p\w\4\n\f\v\2\u\n\m\t\2\a\l\j\7\i\x\d\j\v\t\k\t\0\4\4\4\j\t\x\p\c\c\g\p\2\j\g\8\g\e\w\r\g\j\y\v\k\o\w\g\3\i\w\d\y\3\n\p\f\z\1\t\7\v\g\y\e\t\s\j\o\e\4\m\z\a\j\7\8\5\y\b\6\n\4\n\v\p\a\1\d\y\m\i\h\d\h\7\i\l\7\z\a\0\o\y\a\5\4\9\i\f\e\l\m\l\8\z\d\h\y\8\p\c\l\6\f\9\r\2\z\u\3\2\6\x\g\b\k\s\u\b\d\x\6\4\f\w\s\w\3\n\w\c\j\2\e\k\l\n\h\k\u\5\i\m\o\y\i\r\m\z\l\v\z\p\e\p\x\u\d\o\f\v\g\n\i\c\p\8\r\6\p\e\h\x\i\k\z\j\b\z\d\q\z\4\9\s\e\d\w\c\6\w\y\d\x\o\c\0\k\u\z\h\d\y\a\v\4\g\2\8\g\s\b\j\r\2\4\a\q\l\t\2\6\1\z\6\4\v\k\0\s\3\3\p\1\a\6\6\q\4\r\h\m\b\k\g\r\0\3\k\z\z\x\m\x\b\j\3\8\u\o\2\1\p\h\7\2\s\8\3\w\d\k\g\j\x\l\m\e\j\j\d\c\c\q\b\1\3\y\6\7\3\2\s\s\y\g\a\z\8\8 ]] 00:07:29.646 01:50:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.646 01:50:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:29.646 [2024-07-25 01:50:44.821205] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:29.646 [2024-07-25 01:50:44.821300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75736 ] 00:07:29.646 [2024-07-25 01:50:44.935559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:29.905 [2024-07-25 01:50:44.951827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.905 [2024-07-25 01:50:44.985051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.905 [2024-07-25 01:50:45.012665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.905  Copying: 512/512 [B] (average 166 kBps) 00:07:29.905 00:07:29.905 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hxb04tj1zr1cltndc3nwx3is2v9b2lbjccbeq4f6pn79juk6ext8rtehesj1udwfxac8lw1kqa6jtdralnaqw2em48nbex4zoevga6rpbc6goehy9yuwb41zw35pskr4jeabj0i4sgg23lj6rhuo7fg7k0bg10ui9fv3scl1kad5qkfa4c9hly7ex2rpw4nfv2unmt2alj7ixdjvtkt0444jtxpccgp2jg8gewrgjyvkowg3iwdy3npfz1t7vgyetsjoe4mzaj785yb6n4nvpa1dymihdh7il7za0oya549ifelml8zdhy8pcl6f9r2zu326xgbksubdx64fwsw3nwcj2eklnhku5imoyirmzlvzpepxudofvgnicp8r6pehxikzjbzdqz49sedwc6wydxoc0kuzhdyav4g28gsbjr24aqlt261z64vk0s33p1a66q4rhmbkgr03kzzxmxbj38uo21ph72s83wdkgjxlmejjdccqb13y6732ssygaz88 == \h\x\b\0\4\t\j\1\z\r\1\c\l\t\n\d\c\3\n\w\x\3\i\s\2\v\9\b\2\l\b\j\c\c\b\e\q\4\f\6\p\n\7\9\j\u\k\6\e\x\t\8\r\t\e\h\e\s\j\1\u\d\w\f\x\a\c\8\l\w\1\k\q\a\6\j\t\d\r\a\l\n\a\q\w\2\e\m\4\8\n\b\e\x\4\z\o\e\v\g\a\6\r\p\b\c\6\g\o\e\h\y\9\y\u\w\b\4\1\z\w\3\5\p\s\k\r\4\j\e\a\b\j\0\i\4\s\g\g\2\3\l\j\6\r\h\u\o\7\f\g\7\k\0\b\g\1\0\u\i\9\f\v\3\s\c\l\1\k\a\d\5\q\k\f\a\4\c\9\h\l\y\7\e\x\2\r\p\w\4\n\f\v\2\u\n\m\t\2\a\l\j\7\i\x\d\j\v\t\k\t\0\4\4\4\j\t\x\p\c\c\g\p\2\j\g\8\g\e\w\r\g\j\y\v\k\o\w\g\3\i\w\d\y\3\n\p\f\z\1\t\7\v\g\y\e\t\s\j\o\e\4\m\z\a\j\7\8\5\y\b\6\n\4\n\v\p\a\1\d\y\m\i\h\d\h\7\i\l\7\z\a\0\o\y\a\5\4\9\i\f\e\l\m\l\8\z\d\h\y\8\p\c\l\6\f\9\r\2\z\u\3\2\6\x\g\b\k\s\u\b\d\x\6\4\f\w\s\w\3\n\w\c\j\2\e\k\l\n\h\k\u\5\i\m\o\y\i\r\m\z\l\v\z\p\e\p\x\u\d\o\f\v\g\n\i\c\p\8\r\6\p\e\h\x\i\k\z\j\b\z\d\q\z\4\9\s\e\d\w\c\6\w\y\d\x\o\c\0\k\u\z\h\d\y\a\v\4\g\2\8\g\s\b\j\r\2\4\a\q\l\t\2\6\1\z\6\4\v\k\0\s\3\3\p\1\a\6\6\q\4\r\h\m\b\k\g\r\0\3\k\z\z\x\m\x\b\j\3\8\u\o\2\1\p\h\7\2\s\8\3\w\d\k\g\j\x\l\m\e\j\j\d\c\c\q\b\1\3\y\6\7\3\2\s\s\y\g\a\z\8\8 ]] 00:07:29.905 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.905 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:30.163 [2024-07-25 01:50:45.247522] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:30.163 [2024-07-25 01:50:45.247641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75744 ] 00:07:30.164 [2024-07-25 01:50:45.367959] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:30.164 [2024-07-25 01:50:45.384534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.164 [2024-07-25 01:50:45.417083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.164 [2024-07-25 01:50:45.444714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.423  Copying: 512/512 [B] (average 500 kBps) 00:07:30.423 00:07:30.423 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hxb04tj1zr1cltndc3nwx3is2v9b2lbjccbeq4f6pn79juk6ext8rtehesj1udwfxac8lw1kqa6jtdralnaqw2em48nbex4zoevga6rpbc6goehy9yuwb41zw35pskr4jeabj0i4sgg23lj6rhuo7fg7k0bg10ui9fv3scl1kad5qkfa4c9hly7ex2rpw4nfv2unmt2alj7ixdjvtkt0444jtxpccgp2jg8gewrgjyvkowg3iwdy3npfz1t7vgyetsjoe4mzaj785yb6n4nvpa1dymihdh7il7za0oya549ifelml8zdhy8pcl6f9r2zu326xgbksubdx64fwsw3nwcj2eklnhku5imoyirmzlvzpepxudofvgnicp8r6pehxikzjbzdqz49sedwc6wydxoc0kuzhdyav4g28gsbjr24aqlt261z64vk0s33p1a66q4rhmbkgr03kzzxmxbj38uo21ph72s83wdkgjxlmejjdccqb13y6732ssygaz88 == \h\x\b\0\4\t\j\1\z\r\1\c\l\t\n\d\c\3\n\w\x\3\i\s\2\v\9\b\2\l\b\j\c\c\b\e\q\4\f\6\p\n\7\9\j\u\k\6\e\x\t\8\r\t\e\h\e\s\j\1\u\d\w\f\x\a\c\8\l\w\1\k\q\a\6\j\t\d\r\a\l\n\a\q\w\2\e\m\4\8\n\b\e\x\4\z\o\e\v\g\a\6\r\p\b\c\6\g\o\e\h\y\9\y\u\w\b\4\1\z\w\3\5\p\s\k\r\4\j\e\a\b\j\0\i\4\s\g\g\2\3\l\j\6\r\h\u\o\7\f\g\7\k\0\b\g\1\0\u\i\9\f\v\3\s\c\l\1\k\a\d\5\q\k\f\a\4\c\9\h\l\y\7\e\x\2\r\p\w\4\n\f\v\2\u\n\m\t\2\a\l\j\7\i\x\d\j\v\t\k\t\0\4\4\4\j\t\x\p\c\c\g\p\2\j\g\8\g\e\w\r\g\j\y\v\k\o\w\g\3\i\w\d\y\3\n\p\f\z\1\t\7\v\g\y\e\t\s\j\o\e\4\m\z\a\j\7\8\5\y\b\6\n\4\n\v\p\a\1\d\y\m\i\h\d\h\7\i\l\7\z\a\0\o\y\a\5\4\9\i\f\e\l\m\l\8\z\d\h\y\8\p\c\l\6\f\9\r\2\z\u\3\2\6\x\g\b\k\s\u\b\d\x\6\4\f\w\s\w\3\n\w\c\j\2\e\k\l\n\h\k\u\5\i\m\o\y\i\r\m\z\l\v\z\p\e\p\x\u\d\o\f\v\g\n\i\c\p\8\r\6\p\e\h\x\i\k\z\j\b\z\d\q\z\4\9\s\e\d\w\c\6\w\y\d\x\o\c\0\k\u\z\h\d\y\a\v\4\g\2\8\g\s\b\j\r\2\4\a\q\l\t\2\6\1\z\6\4\v\k\0\s\3\3\p\1\a\6\6\q\4\r\h\m\b\k\g\r\0\3\k\z\z\x\m\x\b\j\3\8\u\o\2\1\p\h\7\2\s\8\3\w\d\k\g\j\x\l\m\e\j\j\d\c\c\q\b\1\3\y\6\7\3\2\s\s\y\g\a\z\8\8 ]] 00:07:30.423 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:30.423 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:30.423 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:30.423 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.423 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.423 01:50:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:30.423 [2024-07-25 01:50:45.672777] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:30.423 [2024-07-25 01:50:45.672888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75746 ] 00:07:30.682 [2024-07-25 01:50:45.793578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
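Every Copying line in this test comes from the same matrix: two read-side flags crossed with four write-side flags, with the 512-byte payload re-verified after each pass. A sketch of the loop as reconstructed from the xtrace; gen_bytes is the harness helper seen above, and the base64 round-trip is only an illustrative stand-in for however the harness really encodes the two files for its [[ ... == ... ]] comparison.

flags_ro=(direct nonblock)               # flags valid on the input side
flags_rw=("${flags_ro[@]}" sync dsync)   # the output side also takes sync/dsync
for flag_ro in "${flags_ro[@]}"; do
  gen_bytes 512                          # fresh random payload into dd.dump0
  for flag_rw in "${flags_rw[@]}"; do
    spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    [[ $(base64 -w0 dd.dump1) == "$(base64 -w0 dd.dump0)" ]]  # payload survived intact
  done
done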
00:07:30.682 [2024-07-25 01:50:45.811773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.682 [2024-07-25 01:50:45.844600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.682 [2024-07-25 01:50:45.872765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.941  Copying: 512/512 [B] (average 500 kBps) 00:07:30.941 00:07:30.941 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ex3zcsbbbn10vt873j4o6s2m6jjk0t4jj68gh3room648k855t47qqj5gjbkrtm7mi4y4jhuamaglnp4z0uu9nrg9u6piendmm1jmjozy7oj3g1tysvh9us6box5m81f5pm1we08zp5txivfp7y5dm690jbbgqj38s0h1azzgcfilbklb71jznyym130hvi9q30xxqxfx482l0cmudm3smlesqqu0i6dkitbp4i4ke3qfulibt0np79mpvsxeztia1r5hxbo1fhdsxpt80313w0tm4cttq2swzpnlcxy97jkury98arx5wyjsyujtpin48n9dy6kgsuykribrks8rm604he4y9n10b8nfv2gjm4gopw8yapyvnct2fvjj8eygpx68ljbt0nvu0iu971qjlfff9dtru9qsod1e7vwhbl2rbhfs0bemmjil1vinzxnoyxol8n7z94wsj5gr1fsfjs2jihwtbyenahdnmpmzi1so5uaqmogt6l3sweyj96c == \e\x\3\z\c\s\b\b\b\n\1\0\v\t\8\7\3\j\4\o\6\s\2\m\6\j\j\k\0\t\4\j\j\6\8\g\h\3\r\o\o\m\6\4\8\k\8\5\5\t\4\7\q\q\j\5\g\j\b\k\r\t\m\7\m\i\4\y\4\j\h\u\a\m\a\g\l\n\p\4\z\0\u\u\9\n\r\g\9\u\6\p\i\e\n\d\m\m\1\j\m\j\o\z\y\7\o\j\3\g\1\t\y\s\v\h\9\u\s\6\b\o\x\5\m\8\1\f\5\p\m\1\w\e\0\8\z\p\5\t\x\i\v\f\p\7\y\5\d\m\6\9\0\j\b\b\g\q\j\3\8\s\0\h\1\a\z\z\g\c\f\i\l\b\k\l\b\7\1\j\z\n\y\y\m\1\3\0\h\v\i\9\q\3\0\x\x\q\x\f\x\4\8\2\l\0\c\m\u\d\m\3\s\m\l\e\s\q\q\u\0\i\6\d\k\i\t\b\p\4\i\4\k\e\3\q\f\u\l\i\b\t\0\n\p\7\9\m\p\v\s\x\e\z\t\i\a\1\r\5\h\x\b\o\1\f\h\d\s\x\p\t\8\0\3\1\3\w\0\t\m\4\c\t\t\q\2\s\w\z\p\n\l\c\x\y\9\7\j\k\u\r\y\9\8\a\r\x\5\w\y\j\s\y\u\j\t\p\i\n\4\8\n\9\d\y\6\k\g\s\u\y\k\r\i\b\r\k\s\8\r\m\6\0\4\h\e\4\y\9\n\1\0\b\8\n\f\v\2\g\j\m\4\g\o\p\w\8\y\a\p\y\v\n\c\t\2\f\v\j\j\8\e\y\g\p\x\6\8\l\j\b\t\0\n\v\u\0\i\u\9\7\1\q\j\l\f\f\f\9\d\t\r\u\9\q\s\o\d\1\e\7\v\w\h\b\l\2\r\b\h\f\s\0\b\e\m\m\j\i\l\1\v\i\n\z\x\n\o\y\x\o\l\8\n\7\z\9\4\w\s\j\5\g\r\1\f\s\f\j\s\2\j\i\h\w\t\b\y\e\n\a\h\d\n\m\p\m\z\i\1\s\o\5\u\a\q\m\o\g\t\6\l\3\s\w\e\y\j\9\6\c ]] 00:07:30.941 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.941 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:30.941 [2024-07-25 01:50:46.070920] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:30.941 [2024-07-25 01:50:46.071002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75753 ] 00:07:30.941 [2024-07-25 01:50:46.191722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:30.941 [2024-07-25 01:50:46.207761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.202 [2024-07-25 01:50:46.241097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.202 [2024-07-25 01:50:46.273660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.202  Copying: 512/512 [B] (average 500 kBps) 00:07:31.202 00:07:31.202 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ex3zcsbbbn10vt873j4o6s2m6jjk0t4jj68gh3room648k855t47qqj5gjbkrtm7mi4y4jhuamaglnp4z0uu9nrg9u6piendmm1jmjozy7oj3g1tysvh9us6box5m81f5pm1we08zp5txivfp7y5dm690jbbgqj38s0h1azzgcfilbklb71jznyym130hvi9q30xxqxfx482l0cmudm3smlesqqu0i6dkitbp4i4ke3qfulibt0np79mpvsxeztia1r5hxbo1fhdsxpt80313w0tm4cttq2swzpnlcxy97jkury98arx5wyjsyujtpin48n9dy6kgsuykribrks8rm604he4y9n10b8nfv2gjm4gopw8yapyvnct2fvjj8eygpx68ljbt0nvu0iu971qjlfff9dtru9qsod1e7vwhbl2rbhfs0bemmjil1vinzxnoyxol8n7z94wsj5gr1fsfjs2jihwtbyenahdnmpmzi1so5uaqmogt6l3sweyj96c == \e\x\3\z\c\s\b\b\b\n\1\0\v\t\8\7\3\j\4\o\6\s\2\m\6\j\j\k\0\t\4\j\j\6\8\g\h\3\r\o\o\m\6\4\8\k\8\5\5\t\4\7\q\q\j\5\g\j\b\k\r\t\m\7\m\i\4\y\4\j\h\u\a\m\a\g\l\n\p\4\z\0\u\u\9\n\r\g\9\u\6\p\i\e\n\d\m\m\1\j\m\j\o\z\y\7\o\j\3\g\1\t\y\s\v\h\9\u\s\6\b\o\x\5\m\8\1\f\5\p\m\1\w\e\0\8\z\p\5\t\x\i\v\f\p\7\y\5\d\m\6\9\0\j\b\b\g\q\j\3\8\s\0\h\1\a\z\z\g\c\f\i\l\b\k\l\b\7\1\j\z\n\y\y\m\1\3\0\h\v\i\9\q\3\0\x\x\q\x\f\x\4\8\2\l\0\c\m\u\d\m\3\s\m\l\e\s\q\q\u\0\i\6\d\k\i\t\b\p\4\i\4\k\e\3\q\f\u\l\i\b\t\0\n\p\7\9\m\p\v\s\x\e\z\t\i\a\1\r\5\h\x\b\o\1\f\h\d\s\x\p\t\8\0\3\1\3\w\0\t\m\4\c\t\t\q\2\s\w\z\p\n\l\c\x\y\9\7\j\k\u\r\y\9\8\a\r\x\5\w\y\j\s\y\u\j\t\p\i\n\4\8\n\9\d\y\6\k\g\s\u\y\k\r\i\b\r\k\s\8\r\m\6\0\4\h\e\4\y\9\n\1\0\b\8\n\f\v\2\g\j\m\4\g\o\p\w\8\y\a\p\y\v\n\c\t\2\f\v\j\j\8\e\y\g\p\x\6\8\l\j\b\t\0\n\v\u\0\i\u\9\7\1\q\j\l\f\f\f\9\d\t\r\u\9\q\s\o\d\1\e\7\v\w\h\b\l\2\r\b\h\f\s\0\b\e\m\m\j\i\l\1\v\i\n\z\x\n\o\y\x\o\l\8\n\7\z\9\4\w\s\j\5\g\r\1\f\s\f\j\s\2\j\i\h\w\t\b\y\e\n\a\h\d\n\m\p\m\z\i\1\s\o\5\u\a\q\m\o\g\t\6\l\3\s\w\e\y\j\9\6\c ]] 00:07:31.202 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.202 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:31.202 [2024-07-25 01:50:46.488081] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:31.202 [2024-07-25 01:50:46.488166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75761 ] 00:07:31.462 [2024-07-25 01:50:46.608768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:31.462 [2024-07-25 01:50:46.625772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.462 [2024-07-25 01:50:46.659044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.462 [2024-07-25 01:50:46.686804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.720  Copying: 512/512 [B] (average 250 kBps) 00:07:31.720 00:07:31.721 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ex3zcsbbbn10vt873j4o6s2m6jjk0t4jj68gh3room648k855t47qqj5gjbkrtm7mi4y4jhuamaglnp4z0uu9nrg9u6piendmm1jmjozy7oj3g1tysvh9us6box5m81f5pm1we08zp5txivfp7y5dm690jbbgqj38s0h1azzgcfilbklb71jznyym130hvi9q30xxqxfx482l0cmudm3smlesqqu0i6dkitbp4i4ke3qfulibt0np79mpvsxeztia1r5hxbo1fhdsxpt80313w0tm4cttq2swzpnlcxy97jkury98arx5wyjsyujtpin48n9dy6kgsuykribrks8rm604he4y9n10b8nfv2gjm4gopw8yapyvnct2fvjj8eygpx68ljbt0nvu0iu971qjlfff9dtru9qsod1e7vwhbl2rbhfs0bemmjil1vinzxnoyxol8n7z94wsj5gr1fsfjs2jihwtbyenahdnmpmzi1so5uaqmogt6l3sweyj96c == \e\x\3\z\c\s\b\b\b\n\1\0\v\t\8\7\3\j\4\o\6\s\2\m\6\j\j\k\0\t\4\j\j\6\8\g\h\3\r\o\o\m\6\4\8\k\8\5\5\t\4\7\q\q\j\5\g\j\b\k\r\t\m\7\m\i\4\y\4\j\h\u\a\m\a\g\l\n\p\4\z\0\u\u\9\n\r\g\9\u\6\p\i\e\n\d\m\m\1\j\m\j\o\z\y\7\o\j\3\g\1\t\y\s\v\h\9\u\s\6\b\o\x\5\m\8\1\f\5\p\m\1\w\e\0\8\z\p\5\t\x\i\v\f\p\7\y\5\d\m\6\9\0\j\b\b\g\q\j\3\8\s\0\h\1\a\z\z\g\c\f\i\l\b\k\l\b\7\1\j\z\n\y\y\m\1\3\0\h\v\i\9\q\3\0\x\x\q\x\f\x\4\8\2\l\0\c\m\u\d\m\3\s\m\l\e\s\q\q\u\0\i\6\d\k\i\t\b\p\4\i\4\k\e\3\q\f\u\l\i\b\t\0\n\p\7\9\m\p\v\s\x\e\z\t\i\a\1\r\5\h\x\b\o\1\f\h\d\s\x\p\t\8\0\3\1\3\w\0\t\m\4\c\t\t\q\2\s\w\z\p\n\l\c\x\y\9\7\j\k\u\r\y\9\8\a\r\x\5\w\y\j\s\y\u\j\t\p\i\n\4\8\n\9\d\y\6\k\g\s\u\y\k\r\i\b\r\k\s\8\r\m\6\0\4\h\e\4\y\9\n\1\0\b\8\n\f\v\2\g\j\m\4\g\o\p\w\8\y\a\p\y\v\n\c\t\2\f\v\j\j\8\e\y\g\p\x\6\8\l\j\b\t\0\n\v\u\0\i\u\9\7\1\q\j\l\f\f\f\9\d\t\r\u\9\q\s\o\d\1\e\7\v\w\h\b\l\2\r\b\h\f\s\0\b\e\m\m\j\i\l\1\v\i\n\z\x\n\o\y\x\o\l\8\n\7\z\9\4\w\s\j\5\g\r\1\f\s\f\j\s\2\j\i\h\w\t\b\y\e\n\a\h\d\n\m\p\m\z\i\1\s\o\5\u\a\q\m\o\g\t\6\l\3\s\w\e\y\j\9\6\c ]] 00:07:31.721 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.721 01:50:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:31.721 [2024-07-25 01:50:46.902107] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:31.721 [2024-07-25 01:50:46.902203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75768 ] 00:07:31.721 [2024-07-25 01:50:47.016268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:31.980 [2024-07-25 01:50:47.029604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.980 [2024-07-25 01:50:47.063805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.980 [2024-07-25 01:50:47.094689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.980  Copying: 512/512 [B] (average 500 kBps) 00:07:31.980 00:07:31.980 01:50:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ex3zcsbbbn10vt873j4o6s2m6jjk0t4jj68gh3room648k855t47qqj5gjbkrtm7mi4y4jhuamaglnp4z0uu9nrg9u6piendmm1jmjozy7oj3g1tysvh9us6box5m81f5pm1we08zp5txivfp7y5dm690jbbgqj38s0h1azzgcfilbklb71jznyym130hvi9q30xxqxfx482l0cmudm3smlesqqu0i6dkitbp4i4ke3qfulibt0np79mpvsxeztia1r5hxbo1fhdsxpt80313w0tm4cttq2swzpnlcxy97jkury98arx5wyjsyujtpin48n9dy6kgsuykribrks8rm604he4y9n10b8nfv2gjm4gopw8yapyvnct2fvjj8eygpx68ljbt0nvu0iu971qjlfff9dtru9qsod1e7vwhbl2rbhfs0bemmjil1vinzxnoyxol8n7z94wsj5gr1fsfjs2jihwtbyenahdnmpmzi1so5uaqmogt6l3sweyj96c == \e\x\3\z\c\s\b\b\b\n\1\0\v\t\8\7\3\j\4\o\6\s\2\m\6\j\j\k\0\t\4\j\j\6\8\g\h\3\r\o\o\m\6\4\8\k\8\5\5\t\4\7\q\q\j\5\g\j\b\k\r\t\m\7\m\i\4\y\4\j\h\u\a\m\a\g\l\n\p\4\z\0\u\u\9\n\r\g\9\u\6\p\i\e\n\d\m\m\1\j\m\j\o\z\y\7\o\j\3\g\1\t\y\s\v\h\9\u\s\6\b\o\x\5\m\8\1\f\5\p\m\1\w\e\0\8\z\p\5\t\x\i\v\f\p\7\y\5\d\m\6\9\0\j\b\b\g\q\j\3\8\s\0\h\1\a\z\z\g\c\f\i\l\b\k\l\b\7\1\j\z\n\y\y\m\1\3\0\h\v\i\9\q\3\0\x\x\q\x\f\x\4\8\2\l\0\c\m\u\d\m\3\s\m\l\e\s\q\q\u\0\i\6\d\k\i\t\b\p\4\i\4\k\e\3\q\f\u\l\i\b\t\0\n\p\7\9\m\p\v\s\x\e\z\t\i\a\1\r\5\h\x\b\o\1\f\h\d\s\x\p\t\8\0\3\1\3\w\0\t\m\4\c\t\t\q\2\s\w\z\p\n\l\c\x\y\9\7\j\k\u\r\y\9\8\a\r\x\5\w\y\j\s\y\u\j\t\p\i\n\4\8\n\9\d\y\6\k\g\s\u\y\k\r\i\b\r\k\s\8\r\m\6\0\4\h\e\4\y\9\n\1\0\b\8\n\f\v\2\g\j\m\4\g\o\p\w\8\y\a\p\y\v\n\c\t\2\f\v\j\j\8\e\y\g\p\x\6\8\l\j\b\t\0\n\v\u\0\i\u\9\7\1\q\j\l\f\f\f\9\d\t\r\u\9\q\s\o\d\1\e\7\v\w\h\b\l\2\r\b\h\f\s\0\b\e\m\m\j\i\l\1\v\i\n\z\x\n\o\y\x\o\l\8\n\7\z\9\4\w\s\j\5\g\r\1\f\s\f\j\s\2\j\i\h\w\t\b\y\e\n\a\h\d\n\m\p\m\z\i\1\s\o\5\u\a\q\m\o\g\t\6\l\3\s\w\e\y\j\9\6\c ]] 00:07:31.980 00:07:31.980 real 0m3.339s 00:07:31.980 user 0m1.638s 00:07:31.980 sys 0m0.723s 00:07:31.980 ************************************ 00:07:31.980 END TEST dd_flags_misc_forced_aio 00:07:31.980 01:50:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.980 01:50:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:31.980 ************************************ 00:07:32.239 01:50:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:32.239 01:50:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:32.239 01:50:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:32.239 00:07:32.239 real 0m15.590s 00:07:32.239 user 0m6.681s 00:07:32.239 sys 0m4.136s 00:07:32.239 01:50:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.239 ************************************ 00:07:32.239 END TEST spdk_dd_posix 00:07:32.239 ************************************ 00:07:32.239 01:50:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:32.239 01:50:47 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:32.239 01:50:47 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:07:32.239 01:50:47 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.239 01:50:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:32.239 ************************************ 00:07:32.239 START TEST spdk_dd_malloc 00:07:32.239 ************************************ 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:32.240 * Looking for test storage... 00:07:32.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 
-- # run_test dd_malloc_copy malloc_copy 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:32.240 ************************************ 00:07:32.240 START TEST dd_malloc_copy 00:07:32.240 ************************************ 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:32.240 01:50:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.240 [2024-07-25 01:50:47.522574] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:32.240 [2024-07-25 01:50:47.523189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75837 ] 00:07:32.240 { 00:07:32.240 "subsystems": [ 00:07:32.240 { 00:07:32.240 "subsystem": "bdev", 00:07:32.240 "config": [ 00:07:32.240 { 00:07:32.240 "params": { 00:07:32.240 "block_size": 512, 00:07:32.240 "num_blocks": 1048576, 00:07:32.240 "name": "malloc0" 00:07:32.240 }, 00:07:32.240 "method": "bdev_malloc_create" 00:07:32.240 }, 00:07:32.240 { 00:07:32.240 "params": { 00:07:32.240 "block_size": 512, 00:07:32.240 "num_blocks": 1048576, 00:07:32.240 "name": "malloc1" 00:07:32.240 }, 00:07:32.240 "method": "bdev_malloc_create" 00:07:32.240 }, 00:07:32.240 { 00:07:32.240 "method": "bdev_wait_for_examine" 00:07:32.240 } 00:07:32.240 ] 00:07:32.240 } 00:07:32.240 ] 00:07:32.240 } 00:07:32.499 [2024-07-25 01:50:47.644609] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:32.499 [2024-07-25 01:50:47.661902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.499 [2024-07-25 01:50:47.695312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.499 [2024-07-25 01:50:47.723494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.096  Copying: 239/512 [MB] (239 MBps) Copying: 479/512 [MB] (240 MBps) Copying: 512/512 [MB] (average 239 MBps) 00:07:35.096 00:07:35.096 01:50:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:35.096 01:50:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:35.096 01:50:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:35.096 01:50:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:35.355 [2024-07-25 01:50:50.435293] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:35.355 [2024-07-25 01:50:50.435396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75879 ] 00:07:35.355 { 00:07:35.355 "subsystems": [ 00:07:35.355 { 00:07:35.355 "subsystem": "bdev", 00:07:35.355 "config": [ 00:07:35.355 { 00:07:35.355 "params": { 00:07:35.355 "block_size": 512, 00:07:35.355 "num_blocks": 1048576, 00:07:35.355 "name": "malloc0" 00:07:35.355 }, 00:07:35.355 "method": "bdev_malloc_create" 00:07:35.355 }, 00:07:35.355 { 00:07:35.355 "params": { 00:07:35.355 "block_size": 512, 00:07:35.355 "num_blocks": 1048576, 00:07:35.355 "name": "malloc1" 00:07:35.355 }, 00:07:35.355 "method": "bdev_malloc_create" 00:07:35.355 }, 00:07:35.355 { 00:07:35.355 "method": "bdev_wait_for_examine" 00:07:35.355 } 00:07:35.355 ] 00:07:35.355 } 00:07:35.355 ] 00:07:35.355 } 00:07:35.355 [2024-07-25 01:50:50.555503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
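Both copy directions in dd_malloc_copy use the same invocation shape: two 512 MiB malloc bdevs (1048576 blocks of 512 bytes each) are declared in a JSON config that spdk_dd reads from a file descriptor. A minimal reconstruction follows; the JSON mirrors the config echoed in the trace, but feeding fd 62 through a here-document is an assumption about how the harness wires up /dev/fd/62.

spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 62<<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
    "method": "bdev_malloc_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON

The return leg swaps --ib=malloc1 --ob=malloc0 against the same config, which is why both runs settle at the same ~239 MBps average.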
00:07:35.355 [2024-07-25 01:50:50.572364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.355 [2024-07-25 01:50:50.605016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.355 [2024-07-25 01:50:50.633219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.240  Copying: 238/512 [MB] (238 MBps) Copying: 478/512 [MB] (239 MBps) Copying: 512/512 [MB] (average 239 MBps) 00:07:38.240 00:07:38.240 00:07:38.240 real 0m5.811s 00:07:38.240 user 0m5.186s 00:07:38.240 sys 0m0.480s 00:07:38.240 01:50:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.241 ************************************ 00:07:38.241 END TEST dd_malloc_copy 00:07:38.241 ************************************ 00:07:38.241 01:50:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:38.241 00:07:38.241 real 0m5.948s 00:07:38.241 user 0m5.243s 00:07:38.241 sys 0m0.559s 00:07:38.241 01:50:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.241 01:50:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:38.241 ************************************ 00:07:38.241 END TEST spdk_dd_malloc 00:07:38.241 ************************************ 00:07:38.241 01:50:53 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:38.241 01:50:53 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:38.241 01:50:53 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.241 01:50:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:38.241 ************************************ 00:07:38.241 START TEST spdk_dd_bdev_to_bdev 00:07:38.241 ************************************ 00:07:38.241 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:38.241 * Looking for test storage... 
00:07:38.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:38.241 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.241 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.241 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.241 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.241 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:38.242 
01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:38.242 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:38.243 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.243 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:38.243 ************************************ 00:07:38.243 START TEST dd_inflate_file 00:07:38.243 ************************************ 00:07:38.243 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:38.243 [2024-07-25 01:50:53.524610] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:38.243 [2024-07-25 01:50:53.524709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75978 ] 00:07:38.503 [2024-07-25 01:50:53.645364] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
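The dd_inflate_file run in progress here is plain append arithmetic: dd.dump0 already holds the magic line written above, and spdk_dd tacks 64 MiB of zeros onto it, which is exactly the 67108891-byte figure the wc -c check then reports. A sketch with the paths shortened:

echo 'This Is Our Magic, find it' > dd.dump0    # 26 characters + newline = 27 bytes
spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
wc -c < dd.dump0                                # 64*1048576 + 27 = 67108891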
00:07:38.503 [2024-07-25 01:50:53.663154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.503 [2024-07-25 01:50:53.696077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.503 [2024-07-25 01:50:53.723338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.762  Copying: 64/64 [MB] (average 1777 MBps) 00:07:38.762 00:07:38.762 00:07:38.762 real 0m0.421s 00:07:38.762 user 0m0.229s 00:07:38.762 sys 0m0.202s 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:38.762 ************************************ 00:07:38.762 END TEST dd_inflate_file 00:07:38.762 ************************************ 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:38.762 ************************************ 00:07:38.762 START TEST dd_copy_to_out_bdev 00:07:38.762 ************************************ 00:07:38.762 01:50:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:38.762 [2024-07-25 01:50:53.999166] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:38.762 [2024-07-25 01:50:53.999277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76006 ] 00:07:38.762 { 00:07:38.762 "subsystems": [ 00:07:38.762 { 00:07:38.762 "subsystem": "bdev", 00:07:38.762 "config": [ 00:07:38.762 { 00:07:38.762 "params": { 00:07:38.762 "trtype": "pcie", 00:07:38.762 "traddr": "0000:00:10.0", 00:07:38.762 "name": "Nvme0" 00:07:38.762 }, 00:07:38.762 "method": "bdev_nvme_attach_controller" 00:07:38.762 }, 00:07:38.762 { 00:07:38.762 "params": { 00:07:38.762 "trtype": "pcie", 00:07:38.762 "traddr": "0000:00:11.0", 00:07:38.762 "name": "Nvme1" 00:07:38.762 }, 00:07:38.762 "method": "bdev_nvme_attach_controller" 00:07:38.762 }, 00:07:38.762 { 00:07:38.762 "method": "bdev_wait_for_examine" 00:07:38.762 } 00:07:38.762 ] 00:07:38.762 } 00:07:38.762 ] 00:07:38.762 } 00:07:39.021 [2024-07-25 01:50:54.120323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:39.021 [2024-07-25 01:50:54.136350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.021 [2024-07-25 01:50:54.169276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.021 [2024-07-25 01:50:54.197300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.658  Copying: 52/64 [MB] (52 MBps) Copying: 64/64 [MB] (average 52 MBps) 00:07:40.658 00:07:40.658 00:07:40.658 real 0m1.754s 00:07:40.658 user 0m1.594s 00:07:40.658 sys 0m1.412s 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.658 ************************************ 00:07:40.658 END TEST dd_copy_to_out_bdev 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 ************************************ 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 ************************************ 00:07:40.658 START TEST dd_offset_magic 00:07:40.658 ************************************ 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:40.658 01:50:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 [2024-07-25 01:50:55.809044] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
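The count=65 picked up between the two tests above is, presumably, the inflated file size rounded up to whole 1 MiB blocks:

count = ceil(67108891 / 1048576) = ceil(64.0000257...) = 65

so the 27 magic bytes hanging past the 64 MiB mark cost one extra block. dd_offset_magic below round-trips exactly those 65 blocks, staged into Nvme0n1 by dd_copy_to_out_bdev, through Nvme1n1.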
00:07:40.658 [2024-07-25 01:50:55.809150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 00:07:40.658 { 00:07:40.658 "subsystems": [ 00:07:40.658 { 00:07:40.658 "subsystem": "bdev", 00:07:40.658 "config": [ 00:07:40.658 { 00:07:40.658 "params": { 00:07:40.658 "trtype": "pcie", 00:07:40.658 "traddr": "0000:00:10.0", 00:07:40.658 "name": "Nvme0" 00:07:40.658 }, 00:07:40.658 "method": "bdev_nvme_attach_controller" 00:07:40.658 }, 00:07:40.658 { 00:07:40.658 "params": { 00:07:40.658 "trtype": "pcie", 00:07:40.658 "traddr": "0000:00:11.0", 00:07:40.658 "name": "Nvme1" 00:07:40.658 }, 00:07:40.658 "method": "bdev_nvme_attach_controller" 00:07:40.658 }, 00:07:40.658 { 00:07:40.658 "method": "bdev_wait_for_examine" 00:07:40.658 } 00:07:40.658 ] 00:07:40.658 } 00:07:40.658 ] 00:07:40.658 } 00:07:40.658 [2024-07-25 01:50:55.929768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:40.658 [2024-07-25 01:50:55.948024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.918 [2024-07-25 01:50:55.982348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.918 [2024-07-25 01:50:56.010605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.178  Copying: 65/65 [MB] (average 928 MBps) 00:07:41.178 00:07:41.178 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:41.178 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:41.178 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:41.178 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:41.178 [2024-07-25 01:50:56.458552] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:41.178 [2024-07-25 01:50:56.458669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76071 ] 00:07:41.178 { 00:07:41.178 "subsystems": [ 00:07:41.178 { 00:07:41.178 "subsystem": "bdev", 00:07:41.178 "config": [ 00:07:41.178 { 00:07:41.178 "params": { 00:07:41.178 "trtype": "pcie", 00:07:41.178 "traddr": "0000:00:10.0", 00:07:41.178 "name": "Nvme0" 00:07:41.178 }, 00:07:41.178 "method": "bdev_nvme_attach_controller" 00:07:41.178 }, 00:07:41.178 { 00:07:41.178 "params": { 00:07:41.178 "trtype": "pcie", 00:07:41.178 "traddr": "0000:00:11.0", 00:07:41.178 "name": "Nvme1" 00:07:41.178 }, 00:07:41.178 "method": "bdev_nvme_attach_controller" 00:07:41.178 }, 00:07:41.178 { 00:07:41.178 "method": "bdev_wait_for_examine" 00:07:41.178 } 00:07:41.178 ] 00:07:41.178 } 00:07:41.178 ] 00:07:41.178 } 00:07:41.438 [2024-07-25 01:50:56.579728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:41.438 [2024-07-25 01:50:56.596418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.438 [2024-07-25 01:50:56.633005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.438 [2024-07-25 01:50:56.664038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.697  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:41.697 00:07:41.697 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:41.697 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:41.697 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:41.697 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:41.697 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:41.697 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:41.697 01:50:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:41.957 [2024-07-25 01:50:56.998941] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:41.957 [2024-07-25 01:50:56.999048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76082 ] 00:07:41.957 { 00:07:41.957 "subsystems": [ 00:07:41.957 { 00:07:41.957 "subsystem": "bdev", 00:07:41.957 "config": [ 00:07:41.957 { 00:07:41.957 "params": { 00:07:41.957 "trtype": "pcie", 00:07:41.957 "traddr": "0000:00:10.0", 00:07:41.957 "name": "Nvme0" 00:07:41.957 }, 00:07:41.957 "method": "bdev_nvme_attach_controller" 00:07:41.957 }, 00:07:41.957 { 00:07:41.957 "params": { 00:07:41.957 "trtype": "pcie", 00:07:41.957 "traddr": "0000:00:11.0", 00:07:41.957 "name": "Nvme1" 00:07:41.957 }, 00:07:41.957 "method": "bdev_nvme_attach_controller" 00:07:41.957 }, 00:07:41.957 { 00:07:41.957 "method": "bdev_wait_for_examine" 00:07:41.957 } 00:07:41.957 ] 00:07:41.957 } 00:07:41.957 ] 00:07:41.957 } 00:07:41.957 [2024-07-25 01:50:57.119371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:41.957 [2024-07-25 01:50:57.135384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.957 [2024-07-25 01:50:57.168253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.957 [2024-07-25 01:50:57.196494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.474  Copying: 65/65 [MB] (average 1000 MBps) 00:07:42.474 00:07:42.474 01:50:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:42.474 01:50:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:42.474 01:50:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:42.474 01:50:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:42.474 [2024-07-25 01:50:57.637961] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:42.474 [2024-07-25 01:50:57.638061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76102 ] 00:07:42.474 { 00:07:42.474 "subsystems": [ 00:07:42.474 { 00:07:42.474 "subsystem": "bdev", 00:07:42.474 "config": [ 00:07:42.474 { 00:07:42.474 "params": { 00:07:42.474 "trtype": "pcie", 00:07:42.474 "traddr": "0000:00:10.0", 00:07:42.474 "name": "Nvme0" 00:07:42.474 }, 00:07:42.474 "method": "bdev_nvme_attach_controller" 00:07:42.474 }, 00:07:42.474 { 00:07:42.474 "params": { 00:07:42.474 "trtype": "pcie", 00:07:42.474 "traddr": "0000:00:11.0", 00:07:42.474 "name": "Nvme1" 00:07:42.474 }, 00:07:42.474 "method": "bdev_nvme_attach_controller" 00:07:42.474 }, 00:07:42.474 { 00:07:42.474 "method": "bdev_wait_for_examine" 00:07:42.474 } 00:07:42.474 ] 00:07:42.474 } 00:07:42.474 ] 00:07:42.474 } 00:07:42.474 [2024-07-25 01:50:57.752801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
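[Editor's note] The magic check traced at bdev_to_bdev.sh@35-36 reads exactly 26 bytes, which is the length of "This Is Our Magic, find it"; the backslash-heavy right-hand side in the trace is just bash xtrace's rendering of a quoted, non-glob pattern. A sketch of the check, assuming the dump file is what feeds the read (the redirect is not visible in the trace):

read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
[[ $magic_check == "This Is Our Magic, find it" ]]   # literal 26-byte comparison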
00:07:42.474 [2024-07-25 01:50:57.766418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.747 [2024-07-25 01:50:57.800718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.747 [2024-07-25 01:50:57.829078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.018  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:43.018 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:43.018 00:07:43.018 real 0m2.353s 00:07:43.018 user 0m1.721s 00:07:43.018 sys 0m0.603s 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:43.018 ************************************ 00:07:43.018 END TEST dd_offset_magic 00:07:43.018 ************************************ 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:43.018 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.018 [2024-07-25 01:50:58.199093] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:07:43.018 [2024-07-25 01:50:58.199187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76128 ] 00:07:43.018 { 00:07:43.018 "subsystems": [ 00:07:43.018 { 00:07:43.018 "subsystem": "bdev", 00:07:43.018 "config": [ 00:07:43.018 { 00:07:43.018 "params": { 00:07:43.018 "trtype": "pcie", 00:07:43.018 "traddr": "0000:00:10.0", 00:07:43.018 "name": "Nvme0" 00:07:43.018 }, 00:07:43.018 "method": "bdev_nvme_attach_controller" 00:07:43.018 }, 00:07:43.018 { 00:07:43.018 "params": { 00:07:43.018 "trtype": "pcie", 00:07:43.018 "traddr": "0000:00:11.0", 00:07:43.018 "name": "Nvme1" 00:07:43.018 }, 00:07:43.018 "method": "bdev_nvme_attach_controller" 00:07:43.018 }, 00:07:43.018 { 00:07:43.018 "method": "bdev_wait_for_examine" 00:07:43.018 } 00:07:43.018 ] 00:07:43.018 } 00:07:43.018 ] 00:07:43.018 } 00:07:43.018 [2024-07-25 01:50:58.314553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:43.277 [2024-07-25 01:50:58.330492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.277 [2024-07-25 01:50:58.365176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.277 [2024-07-25 01:50:58.393434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.537  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:43.537 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:43.537 01:50:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.537 [2024-07-25 01:50:58.731248] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
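[Editor's note] Cleanup zeroes each namespace with clear_nvme: the size argument 4194330 is 4 MiB (4 x 1048576) plus 26 bytes, which appears to account for the 26-byte magic written past the data, so at bs=1048576 the count rounds up to five blocks. Sketch of one such pass, with the same gen_conf assumption as above:

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# five 1 MiB zero blocks cover 4194330 bytes = 4*1048576 + 26
"$spdk_dd" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json <(gen_conf)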
00:07:43.537 [2024-07-25 01:50:58.731393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76149 ] 00:07:43.537 { 00:07:43.537 "subsystems": [ 00:07:43.537 { 00:07:43.537 "subsystem": "bdev", 00:07:43.537 "config": [ 00:07:43.537 { 00:07:43.537 "params": { 00:07:43.537 "trtype": "pcie", 00:07:43.537 "traddr": "0000:00:10.0", 00:07:43.537 "name": "Nvme0" 00:07:43.537 }, 00:07:43.537 "method": "bdev_nvme_attach_controller" 00:07:43.537 }, 00:07:43.537 { 00:07:43.537 "params": { 00:07:43.537 "trtype": "pcie", 00:07:43.537 "traddr": "0000:00:11.0", 00:07:43.537 "name": "Nvme1" 00:07:43.537 }, 00:07:43.537 "method": "bdev_nvme_attach_controller" 00:07:43.537 }, 00:07:43.537 { 00:07:43.537 "method": "bdev_wait_for_examine" 00:07:43.537 } 00:07:43.537 ] 00:07:43.537 } 00:07:43.537 ] 00:07:43.537 } 00:07:43.796 [2024-07-25 01:50:58.856781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:43.796 [2024-07-25 01:50:58.866399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.796 [2024-07-25 01:50:58.899293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.796 [2024-07-25 01:50:58.927630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.055  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:44.055 00:07:44.055 01:50:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:44.055 00:07:44.055 real 0m5.863s 00:07:44.055 user 0m4.419s 00:07:44.055 sys 0m2.714s 00:07:44.055 01:50:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.055 01:50:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:44.055 ************************************ 00:07:44.055 END TEST spdk_dd_bdev_to_bdev 00:07:44.055 ************************************ 00:07:44.055 01:50:59 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:44.055 01:50:59 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:44.055 01:50:59 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.055 01:50:59 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.055 01:50:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:44.055 ************************************ 00:07:44.055 START TEST spdk_dd_uring 00:07:44.055 ************************************ 00:07:44.055 01:50:59 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:44.315 * Looking for test storage... 
00:07:44.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:44.315 ************************************ 00:07:44.315 START TEST dd_uring_copy 00:07:44.315 ************************************ 00:07:44.315 
01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:44.315 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:44.316 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=063ysi3uutmafnbk6wjsy85zt7vxco4whjrk0c7wzqd24baxrk16bkgi0m76xh6i7wn3dlsejketx6xgpbfbw3u74ulhr0shnjnapo54fnpk6hjjykxmotmg75ck1u9rtd37ubc5un3bembdt308w5f5d8e2g3cdkq2dwc64jrhy9n2cg15xwwtlk6yi5562p8pqlygkiwonc96hr83dbhmb7frwrgo806k2gjlft6a32kt7ddjzpyrmq4aw4tkzqcjbqt24i64da9wikrbum08y28ishppbe6r7xd50nprc2nt24jb4pyggylitn7kh86z9deupmhzdp7lh7oi4jumrpp38grz1ys66ysvlfsmi1db9prf9mac51qhbcujewj723jzbzrullpgnwlqo9iwh84ponsxcm7fv3k0s3riveusdqvnu3wuqrz4vr5hw0l1q0bch2p7ra4ltw6ljlgfolyobc2au61agpl4gu8pfy9h3xdk58wos2jpwc2vpop7wkykhc6rrmzikl8s8rfchumagoetdu7z8hyxuwpyl760q6s2ue33578cdwqherz3xulptefk47sn88rkvfzvdgb4w3jcs91490trvctvfp9iezcwaic5k984o0dcc13dxtihf5l4wksfw0buuuyhqx9mirmm1rne4bupro9enaxer11rrhuddzmfkzu6j7g3vs9gtwurlv1dog51756qmlwrwx7vto91tx0mhtbb8fb3hmmv3a5g5oif48ea7wjabnvpkjfkir038bc4olujjdsqpvz0s1sugxqlqs10mg0c0mb5mpbxavmmcid1d3twc33v0463dbwz7j1w7if8hiag5es2l4smk8jq0ob91uvqbbe49n22ap5xgfjlslxkearvaiiijelhqs2oi1wkb8y7uqsjgimy0g1zjgm6f5fu5i8l3kd3ttqp5ca2b9nqmsiyfp4l0zp20wpagb8kfxc8nmmd4fd6h0xlioseal9bk7211fhqifuf3ina3 00:07:44.316 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 063ysi3uutmafnbk6wjsy85zt7vxco4whjrk0c7wzqd24baxrk16bkgi0m76xh6i7wn3dlsejketx6xgpbfbw3u74ulhr0shnjnapo54fnpk6hjjykxmotmg75ck1u9rtd37ubc5un3bembdt308w5f5d8e2g3cdkq2dwc64jrhy9n2cg15xwwtlk6yi5562p8pqlygkiwonc96hr83dbhmb7frwrgo806k2gjlft6a32kt7ddjzpyrmq4aw4tkzqcjbqt24i64da9wikrbum08y28ishppbe6r7xd50nprc2nt24jb4pyggylitn7kh86z9deupmhzdp7lh7oi4jumrpp38grz1ys66ysvlfsmi1db9prf9mac51qhbcujewj723jzbzrullpgnwlqo9iwh84ponsxcm7fv3k0s3riveusdqvnu3wuqrz4vr5hw0l1q0bch2p7ra4ltw6ljlgfolyobc2au61agpl4gu8pfy9h3xdk58wos2jpwc2vpop7wkykhc6rrmzikl8s8rfchumagoetdu7z8hyxuwpyl760q6s2ue33578cdwqherz3xulptefk47sn88rkvfzvdgb4w3jcs91490trvctvfp9iezcwaic5k984o0dcc13dxtihf5l4wksfw0buuuyhqx9mirmm1rne4bupro9enaxer11rrhuddzmfkzu6j7g3vs9gtwurlv1dog51756qmlwrwx7vto91tx0mhtbb8fb3hmmv3a5g5oif48ea7wjabnvpkjfkir038bc4olujjdsqpvz0s1sugxqlqs10mg0c0mb5mpbxavmmcid1d3twc33v0463dbwz7j1w7if8hiag5es2l4smk8jq0ob91uvqbbe49n22ap5xgfjlslxkearvaiiijelhqs2oi1wkb8y7uqsjgimy0g1zjgm6f5fu5i8l3kd3ttqp5ca2b9nqmsiyfp4l0zp20wpagb8kfxc8nmmd4fd6h0xlioseal9bk7211fhqifuf3ina3 00:07:44.316 01:50:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:44.316 [2024-07-25 01:50:59.466752] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:44.316 [2024-07-25 01:50:59.466874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76219 ] 00:07:44.316 [2024-07-25 01:50:59.587485] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
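[Editor's note] uring_zram_copy provisions its backing store through the standard zram sysfs control interface, then builds magic.dump0 so the 1024-character magic lands flush against the end of the 512M device: bs=536869887 is 512 MiB (536870912) minus 1025, leaving exactly enough room for the magic plus a newline. A sketch; the append redirect capturing dd/uring.sh@42's echo is an assumption:

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# hot-add a zram device; the id read back (1 in this run) names /dev/zram1
id=$(cat /sys/class/zram-control/hot_add)
echo 512M > /sys/block/zram$id/disksize          # size it before first use
# fill all but the final 1025 bytes with zeros, then append the 1024-byte magic
"$spdk_dd" --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1
echo "$magic" >> magic.dump0                     # magic plus trailing newline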
00:07:44.316 [2024-07-25 01:50:59.607993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.574 [2024-07-25 01:50:59.650901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.574 [2024-07-25 01:50:59.684172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.141  Copying: 511/511 [MB] (average 1467 MBps) 00:07:45.141 00:07:45.141 01:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:45.141 01:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:45.141 01:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:45.141 01:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.400 [2024-07-25 01:51:00.484807] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:45.400 [2024-07-25 01:51:00.484907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76239 ] 00:07:45.400 { 00:07:45.400 "subsystems": [ 00:07:45.400 { 00:07:45.400 "subsystem": "bdev", 00:07:45.400 "config": [ 00:07:45.400 { 00:07:45.400 "params": { 00:07:45.400 "block_size": 512, 00:07:45.400 "num_blocks": 1048576, 00:07:45.400 "name": "malloc0" 00:07:45.400 }, 00:07:45.400 "method": "bdev_malloc_create" 00:07:45.400 }, 00:07:45.400 { 00:07:45.400 "params": { 00:07:45.400 "filename": "/dev/zram1", 00:07:45.400 "name": "uring0" 00:07:45.400 }, 00:07:45.400 "method": "bdev_uring_create" 00:07:45.400 }, 00:07:45.400 { 00:07:45.400 "method": "bdev_wait_for_examine" 00:07:45.400 } 00:07:45.400 ] 00:07:45.400 } 00:07:45.400 ] 00:07:45.400 } 00:07:45.400 [2024-07-25 01:51:00.605678] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.658 [2024-07-25 01:51:00.830300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.658 [2024-07-25 01:51:00.875530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.658 [2024-07-25 01:51:00.908574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.227  Copying: 219/512 [MB] (219 MBps) Copying: 450/512 [MB] (230 MBps) Copying: 512/512 [MB] (average 226 MBps) 00:07:48.227 00:07:48.227 01:51:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:48.227 01:51:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:48.227 01:51:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:48.227 01:51:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:48.484 [2024-07-25 01:51:03.562928] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
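[Editor's note] Every spdk_dd run in this test is configured entirely through the JSON on fd 62: a 512 MiB malloc bdev (1048576 blocks of 512 B) and a uring bdev wrapped around /dev/zram1. The config below is the same block shown in the trace, reconstructed as a heredoc for readability; only that heredoc wiring is an assumption:

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$spdk_dd" --if=magic.dump0 --ob=uring0 --json <(cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},
   "method":"bdev_malloc_create"},
  {"params":{"filename":"/dev/zram1","name":"uring0"},
   "method":"bdev_uring_create"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
)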
00:07:48.485 [2024-07-25 01:51:03.563005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76280 ] 00:07:48.485 { 00:07:48.485 "subsystems": [ 00:07:48.485 { 00:07:48.485 "subsystem": "bdev", 00:07:48.485 "config": [ 00:07:48.485 { 00:07:48.485 "params": { 00:07:48.485 "block_size": 512, 00:07:48.485 "num_blocks": 1048576, 00:07:48.485 "name": "malloc0" 00:07:48.485 }, 00:07:48.485 "method": "bdev_malloc_create" 00:07:48.485 }, 00:07:48.485 { 00:07:48.485 "params": { 00:07:48.485 "filename": "/dev/zram1", 00:07:48.485 "name": "uring0" 00:07:48.485 }, 00:07:48.485 "method": "bdev_uring_create" 00:07:48.485 }, 00:07:48.485 { 00:07:48.485 "method": "bdev_wait_for_examine" 00:07:48.485 } 00:07:48.485 ] 00:07:48.485 } 00:07:48.485 ] 00:07:48.485 } 00:07:48.485 [2024-07-25 01:51:03.679729] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:48.485 [2024-07-25 01:51:03.700042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.485 [2024-07-25 01:51:03.741528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.485 [2024-07-25 01:51:03.774938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.624  Copying: 192/512 [MB] (192 MBps) Copying: 390/512 [MB] (198 MBps) Copying: 512/512 [MB] (average 196 MBps) 00:07:51.624 00:07:51.624 01:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:51.625 01:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 063ysi3uutmafnbk6wjsy85zt7vxco4whjrk0c7wzqd24baxrk16bkgi0m76xh6i7wn3dlsejketx6xgpbfbw3u74ulhr0shnjnapo54fnpk6hjjykxmotmg75ck1u9rtd37ubc5un3bembdt308w5f5d8e2g3cdkq2dwc64jrhy9n2cg15xwwtlk6yi5562p8pqlygkiwonc96hr83dbhmb7frwrgo806k2gjlft6a32kt7ddjzpyrmq4aw4tkzqcjbqt24i64da9wikrbum08y28ishppbe6r7xd50nprc2nt24jb4pyggylitn7kh86z9deupmhzdp7lh7oi4jumrpp38grz1ys66ysvlfsmi1db9prf9mac51qhbcujewj723jzbzrullpgnwlqo9iwh84ponsxcm7fv3k0s3riveusdqvnu3wuqrz4vr5hw0l1q0bch2p7ra4ltw6ljlgfolyobc2au61agpl4gu8pfy9h3xdk58wos2jpwc2vpop7wkykhc6rrmzikl8s8rfchumagoetdu7z8hyxuwpyl760q6s2ue33578cdwqherz3xulptefk47sn88rkvfzvdgb4w3jcs91490trvctvfp9iezcwaic5k984o0dcc13dxtihf5l4wksfw0buuuyhqx9mirmm1rne4bupro9enaxer11rrhuddzmfkzu6j7g3vs9gtwurlv1dog51756qmlwrwx7vto91tx0mhtbb8fb3hmmv3a5g5oif48ea7wjabnvpkjfkir038bc4olujjdsqpvz0s1sugxqlqs10mg0c0mb5mpbxavmmcid1d3twc33v0463dbwz7j1w7if8hiag5es2l4smk8jq0ob91uvqbbe49n22ap5xgfjlslxkearvaiiijelhqs2oi1wkb8y7uqsjgimy0g1zjgm6f5fu5i8l3kd3ttqp5ca2b9nqmsiyfp4l0zp20wpagb8kfxc8nmmd4fd6h0xlioseal9bk7211fhqifuf3ina3 == 
\0\6\3\y\s\i\3\u\u\t\m\a\f\n\b\k\6\w\j\s\y\8\5\z\t\7\v\x\c\o\4\w\h\j\r\k\0\c\7\w\z\q\d\2\4\b\a\x\r\k\1\6\b\k\g\i\0\m\7\6\x\h\6\i\7\w\n\3\d\l\s\e\j\k\e\t\x\6\x\g\p\b\f\b\w\3\u\7\4\u\l\h\r\0\s\h\n\j\n\a\p\o\5\4\f\n\p\k\6\h\j\j\y\k\x\m\o\t\m\g\7\5\c\k\1\u\9\r\t\d\3\7\u\b\c\5\u\n\3\b\e\m\b\d\t\3\0\8\w\5\f\5\d\8\e\2\g\3\c\d\k\q\2\d\w\c\6\4\j\r\h\y\9\n\2\c\g\1\5\x\w\w\t\l\k\6\y\i\5\5\6\2\p\8\p\q\l\y\g\k\i\w\o\n\c\9\6\h\r\8\3\d\b\h\m\b\7\f\r\w\r\g\o\8\0\6\k\2\g\j\l\f\t\6\a\3\2\k\t\7\d\d\j\z\p\y\r\m\q\4\a\w\4\t\k\z\q\c\j\b\q\t\2\4\i\6\4\d\a\9\w\i\k\r\b\u\m\0\8\y\2\8\i\s\h\p\p\b\e\6\r\7\x\d\5\0\n\p\r\c\2\n\t\2\4\j\b\4\p\y\g\g\y\l\i\t\n\7\k\h\8\6\z\9\d\e\u\p\m\h\z\d\p\7\l\h\7\o\i\4\j\u\m\r\p\p\3\8\g\r\z\1\y\s\6\6\y\s\v\l\f\s\m\i\1\d\b\9\p\r\f\9\m\a\c\5\1\q\h\b\c\u\j\e\w\j\7\2\3\j\z\b\z\r\u\l\l\p\g\n\w\l\q\o\9\i\w\h\8\4\p\o\n\s\x\c\m\7\f\v\3\k\0\s\3\r\i\v\e\u\s\d\q\v\n\u\3\w\u\q\r\z\4\v\r\5\h\w\0\l\1\q\0\b\c\h\2\p\7\r\a\4\l\t\w\6\l\j\l\g\f\o\l\y\o\b\c\2\a\u\6\1\a\g\p\l\4\g\u\8\p\f\y\9\h\3\x\d\k\5\8\w\o\s\2\j\p\w\c\2\v\p\o\p\7\w\k\y\k\h\c\6\r\r\m\z\i\k\l\8\s\8\r\f\c\h\u\m\a\g\o\e\t\d\u\7\z\8\h\y\x\u\w\p\y\l\7\6\0\q\6\s\2\u\e\3\3\5\7\8\c\d\w\q\h\e\r\z\3\x\u\l\p\t\e\f\k\4\7\s\n\8\8\r\k\v\f\z\v\d\g\b\4\w\3\j\c\s\9\1\4\9\0\t\r\v\c\t\v\f\p\9\i\e\z\c\w\a\i\c\5\k\9\8\4\o\0\d\c\c\1\3\d\x\t\i\h\f\5\l\4\w\k\s\f\w\0\b\u\u\u\y\h\q\x\9\m\i\r\m\m\1\r\n\e\4\b\u\p\r\o\9\e\n\a\x\e\r\1\1\r\r\h\u\d\d\z\m\f\k\z\u\6\j\7\g\3\v\s\9\g\t\w\u\r\l\v\1\d\o\g\5\1\7\5\6\q\m\l\w\r\w\x\7\v\t\o\9\1\t\x\0\m\h\t\b\b\8\f\b\3\h\m\m\v\3\a\5\g\5\o\i\f\4\8\e\a\7\w\j\a\b\n\v\p\k\j\f\k\i\r\0\3\8\b\c\4\o\l\u\j\j\d\s\q\p\v\z\0\s\1\s\u\g\x\q\l\q\s\1\0\m\g\0\c\0\m\b\5\m\p\b\x\a\v\m\m\c\i\d\1\d\3\t\w\c\3\3\v\0\4\6\3\d\b\w\z\7\j\1\w\7\i\f\8\h\i\a\g\5\e\s\2\l\4\s\m\k\8\j\q\0\o\b\9\1\u\v\q\b\b\e\4\9\n\2\2\a\p\5\x\g\f\j\l\s\l\x\k\e\a\r\v\a\i\i\i\j\e\l\h\q\s\2\o\i\1\w\k\b\8\y\7\u\q\s\j\g\i\m\y\0\g\1\z\j\g\m\6\f\5\f\u\5\i\8\l\3\k\d\3\t\t\q\p\5\c\a\2\b\9\n\q\m\s\i\y\f\p\4\l\0\z\p\2\0\w\p\a\g\b\8\k\f\x\c\8\n\m\m\d\4\f\d\6\h\0\x\l\i\o\s\e\a\l\9\b\k\7\2\1\1\f\h\q\i\f\u\f\3\i\n\a\3 ]] 00:07:51.625 01:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:51.625 01:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 063ysi3uutmafnbk6wjsy85zt7vxco4whjrk0c7wzqd24baxrk16bkgi0m76xh6i7wn3dlsejketx6xgpbfbw3u74ulhr0shnjnapo54fnpk6hjjykxmotmg75ck1u9rtd37ubc5un3bembdt308w5f5d8e2g3cdkq2dwc64jrhy9n2cg15xwwtlk6yi5562p8pqlygkiwonc96hr83dbhmb7frwrgo806k2gjlft6a32kt7ddjzpyrmq4aw4tkzqcjbqt24i64da9wikrbum08y28ishppbe6r7xd50nprc2nt24jb4pyggylitn7kh86z9deupmhzdp7lh7oi4jumrpp38grz1ys66ysvlfsmi1db9prf9mac51qhbcujewj723jzbzrullpgnwlqo9iwh84ponsxcm7fv3k0s3riveusdqvnu3wuqrz4vr5hw0l1q0bch2p7ra4ltw6ljlgfolyobc2au61agpl4gu8pfy9h3xdk58wos2jpwc2vpop7wkykhc6rrmzikl8s8rfchumagoetdu7z8hyxuwpyl760q6s2ue33578cdwqherz3xulptefk47sn88rkvfzvdgb4w3jcs91490trvctvfp9iezcwaic5k984o0dcc13dxtihf5l4wksfw0buuuyhqx9mirmm1rne4bupro9enaxer11rrhuddzmfkzu6j7g3vs9gtwurlv1dog51756qmlwrwx7vto91tx0mhtbb8fb3hmmv3a5g5oif48ea7wjabnvpkjfkir038bc4olujjdsqpvz0s1sugxqlqs10mg0c0mb5mpbxavmmcid1d3twc33v0463dbwz7j1w7if8hiag5es2l4smk8jq0ob91uvqbbe49n22ap5xgfjlslxkearvaiiijelhqs2oi1wkb8y7uqsjgimy0g1zjgm6f5fu5i8l3kd3ttqp5ca2b9nqmsiyfp4l0zp20wpagb8kfxc8nmmd4fd6h0xlioseal9bk7211fhqifuf3ina3 == 
\0\6\3\y\s\i\3\u\u\t\m\a\f\n\b\k\6\w\j\s\y\8\5\z\t\7\v\x\c\o\4\w\h\j\r\k\0\c\7\w\z\q\d\2\4\b\a\x\r\k\1\6\b\k\g\i\0\m\7\6\x\h\6\i\7\w\n\3\d\l\s\e\j\k\e\t\x\6\x\g\p\b\f\b\w\3\u\7\4\u\l\h\r\0\s\h\n\j\n\a\p\o\5\4\f\n\p\k\6\h\j\j\y\k\x\m\o\t\m\g\7\5\c\k\1\u\9\r\t\d\3\7\u\b\c\5\u\n\3\b\e\m\b\d\t\3\0\8\w\5\f\5\d\8\e\2\g\3\c\d\k\q\2\d\w\c\6\4\j\r\h\y\9\n\2\c\g\1\5\x\w\w\t\l\k\6\y\i\5\5\6\2\p\8\p\q\l\y\g\k\i\w\o\n\c\9\6\h\r\8\3\d\b\h\m\b\7\f\r\w\r\g\o\8\0\6\k\2\g\j\l\f\t\6\a\3\2\k\t\7\d\d\j\z\p\y\r\m\q\4\a\w\4\t\k\z\q\c\j\b\q\t\2\4\i\6\4\d\a\9\w\i\k\r\b\u\m\0\8\y\2\8\i\s\h\p\p\b\e\6\r\7\x\d\5\0\n\p\r\c\2\n\t\2\4\j\b\4\p\y\g\g\y\l\i\t\n\7\k\h\8\6\z\9\d\e\u\p\m\h\z\d\p\7\l\h\7\o\i\4\j\u\m\r\p\p\3\8\g\r\z\1\y\s\6\6\y\s\v\l\f\s\m\i\1\d\b\9\p\r\f\9\m\a\c\5\1\q\h\b\c\u\j\e\w\j\7\2\3\j\z\b\z\r\u\l\l\p\g\n\w\l\q\o\9\i\w\h\8\4\p\o\n\s\x\c\m\7\f\v\3\k\0\s\3\r\i\v\e\u\s\d\q\v\n\u\3\w\u\q\r\z\4\v\r\5\h\w\0\l\1\q\0\b\c\h\2\p\7\r\a\4\l\t\w\6\l\j\l\g\f\o\l\y\o\b\c\2\a\u\6\1\a\g\p\l\4\g\u\8\p\f\y\9\h\3\x\d\k\5\8\w\o\s\2\j\p\w\c\2\v\p\o\p\7\w\k\y\k\h\c\6\r\r\m\z\i\k\l\8\s\8\r\f\c\h\u\m\a\g\o\e\t\d\u\7\z\8\h\y\x\u\w\p\y\l\7\6\0\q\6\s\2\u\e\3\3\5\7\8\c\d\w\q\h\e\r\z\3\x\u\l\p\t\e\f\k\4\7\s\n\8\8\r\k\v\f\z\v\d\g\b\4\w\3\j\c\s\9\1\4\9\0\t\r\v\c\t\v\f\p\9\i\e\z\c\w\a\i\c\5\k\9\8\4\o\0\d\c\c\1\3\d\x\t\i\h\f\5\l\4\w\k\s\f\w\0\b\u\u\u\y\h\q\x\9\m\i\r\m\m\1\r\n\e\4\b\u\p\r\o\9\e\n\a\x\e\r\1\1\r\r\h\u\d\d\z\m\f\k\z\u\6\j\7\g\3\v\s\9\g\t\w\u\r\l\v\1\d\o\g\5\1\7\5\6\q\m\l\w\r\w\x\7\v\t\o\9\1\t\x\0\m\h\t\b\b\8\f\b\3\h\m\m\v\3\a\5\g\5\o\i\f\4\8\e\a\7\w\j\a\b\n\v\p\k\j\f\k\i\r\0\3\8\b\c\4\o\l\u\j\j\d\s\q\p\v\z\0\s\1\s\u\g\x\q\l\q\s\1\0\m\g\0\c\0\m\b\5\m\p\b\x\a\v\m\m\c\i\d\1\d\3\t\w\c\3\3\v\0\4\6\3\d\b\w\z\7\j\1\w\7\i\f\8\h\i\a\g\5\e\s\2\l\4\s\m\k\8\j\q\0\o\b\9\1\u\v\q\b\b\e\4\9\n\2\2\a\p\5\x\g\f\j\l\s\l\x\k\e\a\r\v\a\i\i\i\j\e\l\h\q\s\2\o\i\1\w\k\b\8\y\7\u\q\s\j\g\i\m\y\0\g\1\z\j\g\m\6\f\5\f\u\5\i\8\l\3\k\d\3\t\t\q\p\5\c\a\2\b\9\n\q\m\s\i\y\f\p\4\l\0\z\p\2\0\w\p\a\g\b\8\k\f\x\c\8\n\m\m\d\4\f\d\6\h\0\x\l\i\o\s\e\a\l\9\b\k\7\2\1\1\f\h\q\i\f\u\f\3\i\n\a\3 ]] 00:07:51.625 01:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:51.884 01:51:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:51.884 01:51:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:51.884 01:51:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:51.884 01:51:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:51.884 [2024-07-25 01:51:07.091195] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
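[Editor's note] With the device populated, the test proves the round trip two ways: a glob-escaped [[ ]] match on the 1024-character magic read back from each dump (the escaped walls of text above), and a byte-for-byte diff of the two files. Condensed sketch of the read-back leg, gen_conf wiring assumed as before:

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$spdk_dd" --ib=uring0 --of=magic.dump1 --json <(gen_conf)   # uring bdev -> file
diff -q magic.dump0 magic.dump1                              # whole-file identity check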
00:07:51.884 [2024-07-25 01:51:07.091956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76334 ] 00:07:51.884 { 00:07:51.884 "subsystems": [ 00:07:51.884 { 00:07:51.884 "subsystem": "bdev", 00:07:51.884 "config": [ 00:07:51.884 { 00:07:51.884 "params": { 00:07:51.884 "block_size": 512, 00:07:51.884 "num_blocks": 1048576, 00:07:51.884 "name": "malloc0" 00:07:51.884 }, 00:07:51.884 "method": "bdev_malloc_create" 00:07:51.884 }, 00:07:51.884 { 00:07:51.884 "params": { 00:07:51.884 "filename": "/dev/zram1", 00:07:51.884 "name": "uring0" 00:07:51.884 }, 00:07:51.884 "method": "bdev_uring_create" 00:07:51.884 }, 00:07:51.884 { 00:07:51.884 "method": "bdev_wait_for_examine" 00:07:51.884 } 00:07:51.884 ] 00:07:51.884 } 00:07:51.884 ] 00:07:51.884 } 00:07:52.143 [2024-07-25 01:51:07.215476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.143 [2024-07-25 01:51:07.231743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.143 [2024-07-25 01:51:07.263746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.143 [2024-07-25 01:51:07.290969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.316  Copying: 181/512 [MB] (181 MBps) Copying: 363/512 [MB] (181 MBps) Copying: 512/512 [MB] (average 180 MBps) 00:07:55.316 00:07:55.316 01:51:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:55.316 01:51:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:55.316 01:51:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:55.316 01:51:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:55.316 01:51:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:55.316 01:51:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:55.316 01:51:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:55.316 01:51:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:55.316 [2024-07-25 01:51:10.501162] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:07:55.316 [2024-07-25 01:51:10.501251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76379 ] 00:07:55.316 { 00:07:55.316 "subsystems": [ 00:07:55.316 { 00:07:55.316 "subsystem": "bdev", 00:07:55.316 "config": [ 00:07:55.316 { 00:07:55.316 "params": { 00:07:55.316 "block_size": 512, 00:07:55.316 "num_blocks": 1048576, 00:07:55.316 "name": "malloc0" 00:07:55.316 }, 00:07:55.316 "method": "bdev_malloc_create" 00:07:55.316 }, 00:07:55.316 { 00:07:55.316 "params": { 00:07:55.316 "filename": "/dev/zram1", 00:07:55.316 "name": "uring0" 00:07:55.316 }, 00:07:55.316 "method": "bdev_uring_create" 00:07:55.316 }, 00:07:55.316 { 00:07:55.316 "params": { 00:07:55.316 "name": "uring0" 00:07:55.316 }, 00:07:55.316 "method": "bdev_uring_delete" 00:07:55.316 }, 00:07:55.316 { 00:07:55.316 "method": "bdev_wait_for_examine" 00:07:55.316 } 00:07:55.316 ] 00:07:55.316 } 00:07:55.316 ] 00:07:55.316 } 00:07:55.575 [2024-07-25 01:51:10.621698] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:55.575 [2024-07-25 01:51:10.638715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.575 [2024-07-25 01:51:10.670451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.576 [2024-07-25 01:51:10.701328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.834  Copying: 0/0 [B] (average 0 Bps) 00:07:55.834 00:07:55.834 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:55.834 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.835 01:51:11 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.835 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:55.835 [2024-07-25 01:51:11.112406] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:55.835 [2024-07-25 01:51:11.112495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76408 ] 00:07:55.835 { 00:07:55.835 "subsystems": [ 00:07:55.835 { 00:07:55.835 "subsystem": "bdev", 00:07:55.835 "config": [ 00:07:55.835 { 00:07:55.835 "params": { 00:07:55.835 "block_size": 512, 00:07:55.835 "num_blocks": 1048576, 00:07:55.835 "name": "malloc0" 00:07:55.835 }, 00:07:55.835 "method": "bdev_malloc_create" 00:07:55.835 }, 00:07:55.835 { 00:07:55.835 "params": { 00:07:55.835 "filename": "/dev/zram1", 00:07:55.835 "name": "uring0" 00:07:55.835 }, 00:07:55.835 "method": "bdev_uring_create" 00:07:55.835 }, 00:07:55.835 { 00:07:55.835 "params": { 00:07:55.835 "name": "uring0" 00:07:55.835 }, 00:07:55.835 "method": "bdev_uring_delete" 00:07:55.835 }, 00:07:55.835 { 00:07:55.835 "method": "bdev_wait_for_examine" 00:07:55.835 } 00:07:55.835 ] 00:07:55.835 } 00:07:55.835 ] 00:07:55.835 } 00:07:56.094 [2024-07-25 01:51:11.233395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:56.094 [2024-07-25 01:51:11.246851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.094 [2024-07-25 01:51:11.278417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.094 [2024-07-25 01:51:11.305731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.353 [2024-07-25 01:51:11.426815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:56.353 [2024-07-25 01:51:11.426890] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:56.353 [2024-07-25 01:51:11.426917] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:56.353 [2024-07-25 01:51:11.426926] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.353 [2024-07-25 01:51:11.580695] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- 
# [[ -e /sys/block/zram1 ]] 00:07:56.353 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:56.612 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:56.612 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:56.612 00:07:56.612 real 0m12.457s 00:07:56.612 user 0m8.339s 00:07:56.612 sys 0m10.438s 00:07:56.612 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.612 01:51:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:56.612 ************************************ 00:07:56.612 END TEST dd_uring_copy 00:07:56.612 ************************************ 00:07:56.612 00:07:56.612 real 0m12.603s 00:07:56.612 user 0m8.395s 00:07:56.612 sys 0m10.517s 00:07:56.612 01:51:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.612 01:51:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:56.612 ************************************ 00:07:56.612 END TEST spdk_dd_uring 00:07:56.612 ************************************ 00:07:56.870 01:51:11 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:56.870 01:51:11 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.870 01:51:11 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.871 01:51:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:56.871 ************************************ 00:07:56.871 START TEST spdk_dd_sparse 00:07:56.871 ************************************ 00:07:56.871 01:51:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:56.871 * Looking for test storage... 
00:07:56.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 
00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:56.871 1+0 records in 00:07:56.871 1+0 records out 00:07:56.871 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00650654 s, 645 MB/s 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:56.871 1+0 records in 00:07:56.871 1+0 records out 00:07:56.871 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00633129 s, 662 MB/s 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:56.871 1+0 records in 00:07:56.871 1+0 records out 00:07:56.871 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00626311 s, 670 MB/s 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:56.871 ************************************ 00:07:56.871 START TEST dd_sparse_file_to_file 00:07:56.871 ************************************ 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:56.871 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:56.871 [2024-07-25 01:51:12.117411] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
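[Editor's note] prepare builds the sparse fixtures: a 100 MiB plain file backing the AIO bdev, and file_zero1 with three 4 MiB data extents at offsets 0, 16 MiB (seek=4 at bs=4M) and 32 MiB (seek=8). That yields an apparent size of 36 MiB (37748736 bytes) but only 12 MiB allocated (24576 blocks of 512 B), the exact figures the stat assertions that follow compare. The commands, verbatim from the trace apart from the comments:

truncate dd_sparse_aio_disk --size 104857600        # 100 MiB AIO backing file
dd if=/dev/zero of=file_zero1 bs=4M count=1         # extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # extent at 32 MiB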
00:07:56.871 [2024-07-25 01:51:12.117502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76496 ] 00:07:56.871 { 00:07:56.871 "subsystems": [ 00:07:56.871 { 00:07:56.871 "subsystem": "bdev", 00:07:56.871 "config": [ 00:07:56.871 { 00:07:56.871 "params": { 00:07:56.871 "block_size": 4096, 00:07:56.871 "filename": "dd_sparse_aio_disk", 00:07:56.871 "name": "dd_aio" 00:07:56.871 }, 00:07:56.871 "method": "bdev_aio_create" 00:07:56.871 }, 00:07:56.871 { 00:07:56.871 "params": { 00:07:56.871 "lvs_name": "dd_lvstore", 00:07:56.871 "bdev_name": "dd_aio" 00:07:56.871 }, 00:07:56.871 "method": "bdev_lvol_create_lvstore" 00:07:56.871 }, 00:07:56.871 { 00:07:56.871 "method": "bdev_wait_for_examine" 00:07:56.871 } 00:07:56.871 ] 00:07:56.871 } 00:07:56.871 ] 00:07:56.871 } 00:07:57.130 [2024-07-25 01:51:12.237649] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.130 [2024-07-25 01:51:12.255616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.130 [2024-07-25 01:51:12.295545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.130 [2024-07-25 01:51:12.327630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.389  Copying: 12/36 [MB] (average 1090 MBps) 00:07:57.389 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:57.389 00:07:57.389 real 0m0.530s 00:07:57.389 user 0m0.311s 00:07:57.389 sys 0m0.255s 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:57.389 ************************************ 00:07:57.389 END TEST dd_sparse_file_to_file 00:07:57.389 ************************************ 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.389 01:51:12 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:57.389 ************************************ 00:07:57.389 START TEST dd_sparse_file_to_bdev 00:07:57.389 ************************************ 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:57.389 01:51:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:57.648 [2024-07-25 01:51:12.688427] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:57.648 [2024-07-25 01:51:12.688524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76539 ] 00:07:57.648 { 00:07:57.648 "subsystems": [ 00:07:57.648 { 00:07:57.648 "subsystem": "bdev", 00:07:57.648 "config": [ 00:07:57.648 { 00:07:57.648 "params": { 00:07:57.648 "block_size": 4096, 00:07:57.648 "filename": "dd_sparse_aio_disk", 00:07:57.648 "name": "dd_aio" 00:07:57.648 }, 00:07:57.648 "method": "bdev_aio_create" 00:07:57.648 }, 00:07:57.648 { 00:07:57.648 "params": { 00:07:57.648 "lvs_name": "dd_lvstore", 00:07:57.648 "lvol_name": "dd_lvol", 00:07:57.648 "size_in_mib": 36, 00:07:57.648 "thin_provision": true 00:07:57.648 }, 00:07:57.648 "method": "bdev_lvol_create" 00:07:57.648 }, 00:07:57.648 { 00:07:57.648 "method": "bdev_wait_for_examine" 00:07:57.648 } 00:07:57.648 ] 00:07:57.648 } 00:07:57.648 ] 00:07:57.648 } 00:07:57.648 [2024-07-25 01:51:12.809307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
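The JSON interleaved above is the output of gen_conf: rather than writing a config file to disk, the harness hands spdk_dd the whole bdev subsystem configuration on an anonymous descriptor (--json /dev/fd/62). A hand-rolled sketch of the same file_to_bdev invocation, assuming the dd_lvstore created by the previous test is still present on the backing file (paths, sizes, and names are copied from the log, not invented):

# Same config fed via process substitution instead of gen_conf.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse \
  --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_aio_create",
   "params": {"filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096}},
  {"method": "bdev_lvol_create",
   "params": {"lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
              "size_in_mib": 36, "thin_provision": true}},
  {"method": "bdev_wait_for_examine"}
]}]}
EOF
)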
00:07:57.648 [2024-07-25 01:51:12.830335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.648 [2024-07-25 01:51:12.870750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.648 [2024-07-25 01:51:12.903076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.907  Copying: 12/36 [MB] (average 521 MBps) 00:07:57.907 00:07:57.907 00:07:57.907 real 0m0.503s 00:07:57.907 user 0m0.319s 00:07:57.907 sys 0m0.247s 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.907 ************************************ 00:07:57.907 END TEST dd_sparse_file_to_bdev ************************************ 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:57.907 ************************************ 00:07:57.907 START TEST dd_sparse_bdev_to_file ************************************ 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:57.907 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:58.165 [2024-07-25 01:51:13.245816] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
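bdev_to_file closes the round trip: the thin-provisioned lvol written in the previous test is read back into file_zero3 with --sparse, and the assertions that follow compare only the logical size (%s) and allocated blocks (%b) of file_zero2 and file_zero3. A byte-level comparison is not part of the suite, but would be a quick extra hand check (illustrative, not from the test):

# Extra hand check: identical content and identical sparseness.
cmp file_zero2 file_zero3 && echo 'contents match'
stat --printf='%n: %s bytes, %b blocks\n' file_zero2 file_zero3   # expect 37748736 bytes, 24576 blocks each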
00:07:58.165 [2024-07-25 01:51:13.245914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76571 ] 00:07:58.165 { 00:07:58.165 "subsystems": [ 00:07:58.165 { 00:07:58.165 "subsystem": "bdev", 00:07:58.165 "config": [ 00:07:58.165 { 00:07:58.165 "params": { 00:07:58.165 "block_size": 4096, 00:07:58.165 "filename": "dd_sparse_aio_disk", 00:07:58.165 "name": "dd_aio" 00:07:58.165 }, 00:07:58.165 "method": "bdev_aio_create" 00:07:58.165 }, 00:07:58.165 { 00:07:58.165 "method": "bdev_wait_for_examine" 00:07:58.165 } 00:07:58.165 ] 00:07:58.165 } 00:07:58.165 ] 00:07:58.165 } 00:07:58.165 [2024-07-25 01:51:13.366594] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:58.165 [2024-07-25 01:51:13.385332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.165 [2024-07-25 01:51:13.425603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.165 [2024-07-25 01:51:13.458306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.423  Copying: 12/36 [MB] (average 1200 MBps) 00:07:58.423 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:58.423 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:58.423 00:07:58.423 real 0m0.513s 00:07:58.423 user 0m0.312s 00:07:58.423 sys 0m0.248s 00:07:58.424 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.424 ************************************ 00:07:58.424 END TEST dd_sparse_bdev_to_file 00:07:58.424 01:51:13 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:58.424 ************************************ 00:07:58.683 01:51:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:58.683 01:51:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:58.683 01:51:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:58.683 01:51:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:58.683 01:51:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:58.683 00:07:58.683 real 0m1.833s 00:07:58.683 user 0m1.041s 00:07:58.683 sys 0m0.934s 00:07:58.683 01:51:13 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.683 ************************************ 00:07:58.683 END TEST spdk_dd_sparse 00:07:58.683 ************************************ 00:07:58.683 01:51:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:58.683 01:51:13 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:58.683 01:51:13 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.683 01:51:13 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.683 01:51:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:58.683 ************************************ 00:07:58.683 START TEST spdk_dd_negative 00:07:58.683 ************************************ 00:07:58.683 01:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:58.683 * Looking for test storage... 00:07:58.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:58.683 01:51:13 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.683 01:51:13 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.683 01:51:13 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.683 01:51:13 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.683 01:51:13 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.683 01:51:13 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:58.684 ************************************ 00:07:58.684 START TEST dd_invalid_arguments 00:07:58.684 ************************************ 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.684 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:58.684 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:58.684 00:07:58.684 CPU options: 00:07:58.684 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:58.684 (like [0,1,10]) 00:07:58.684 --lcores lcore to CPU mapping list. The list is in the format: 00:07:58.684 [<,lcores[@CPUs]>...] 00:07:58.684 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:58.684 Within the group, '-' is used for range separator, 00:07:58.684 ',' is used for single number separator. 00:07:58.684 '( )' can be omitted for single element group, 00:07:58.684 '@' can be omitted if cpus and lcores have the same value 00:07:58.684 --disable-cpumask-locks Disable CPU core lock files. 00:07:58.684 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:58.684 pollers in the app support interrupt mode) 00:07:58.684 -p, --main-core main (primary) core for DPDK 00:07:58.684 00:07:58.684 Configuration options: 00:07:58.684 -c, --config, --json JSON config file 00:07:58.684 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:58.684 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:58.684 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:58.684 --rpcs-allowed comma-separated list of permitted RPCS 00:07:58.684 --json-ignore-init-errors don't exit on invalid config entry 00:07:58.684 00:07:58.684 Memory options: 00:07:58.684 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:58.684 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:58.684 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:58.684 -R, --huge-unlink unlink huge files after initialization 00:07:58.684 -n, --mem-channels number of memory channels used for DPDK 00:07:58.684 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:58.684 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:58.684 --no-huge run without using hugepages 00:07:58.684 -i, --shm-id shared memory ID (optional) 00:07:58.684 -g, --single-file-segments force creating just one hugetlbfs file 00:07:58.684 00:07:58.684 PCI options: 00:07:58.684 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:58.684 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:58.684 -u, --no-pci disable PCI access 00:07:58.684 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:58.684 00:07:58.684 Log options: 00:07:58.684 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:58.684 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:58.684 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:58.684 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:58.684 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:58.684 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:58.684 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:58.684 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:58.684 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:58.684 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:58.684 virtio_vfio_user, vmd) 00:07:58.684 --silence-noticelog 
disable notice level logging to stderr 00:07:58.944 00:07:58.944 Trace options: 00:07:58.944 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:58.944 setting 0 to disable trace (default 32768) 00:07:58.944 Tracepoints vary in size and can use more than one trace entry. 00:07:58.944 -e, --tpoint-group [:] 00:07:58.944 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:58.944 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:58.944 [2024-07-25 01:51:13.979261] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:58.944 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:58.944 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:58.944 a tracepoint group. First tpoint inside a group can be enabled by 00:07:58.944 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:58.944 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:58.944 in /include/spdk_internal/trace_defs.h 00:07:58.944 00:07:58.944 Other options: 00:07:58.944 -h, --help show this usage 00:07:58.944 -v, --version print SPDK version 00:07:58.944 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:58.944 --env-context Opaque context for use of the env implementation 00:07:58.944 00:07:58.944 Application specific: 00:07:58.944 [--------- DD Options ---------] 00:07:58.944 --if Input file. Must specify either --if or --ib. 00:07:58.944 --ib Input bdev. Must specify either --if or --ib 00:07:58.944 --of Output file. Must specify either --of or --ob. 00:07:58.944 --ob Output bdev. Must specify either --of or --ob. 00:07:58.944 --iflag Input file flags. 00:07:58.944 --oflag Output file flags. 00:07:58.944 --bs I/O unit size (default: 4096) 00:07:58.944 --qd Queue depth (default: 2) 00:07:58.944 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:58.944 --skip Skip this many I/O units at start of input. (default: 0) 00:07:58.944 --seek Skip this many I/O units at start of output. (default: 0) 00:07:58.944 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:58.944 --sparse Enable hole skipping in input target 00:07:58.944 Available iflag and oflag values: 00:07:58.944 append - append mode 00:07:58.944 direct - use direct I/O for data 00:07:58.944 directory - fail unless a directory 00:07:58.944 dsync - use synchronized I/O for data 00:07:58.944 noatime - do not update access time 00:07:58.944 noctty - do not assign controlling terminal from file 00:07:58.944 nofollow - do not follow symlinks 00:07:58.944 nonblock - use non-blocking I/O 00:07:58.944 sync - use synchronized I/O for data and metadata 00:07:58.944 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:58.944 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.944 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.944 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.944 00:07:58.944 real 0m0.069s 00:07:58.944 user 0m0.040s 00:07:58.944 sys 0m0.028s 00:07:58.944 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.944 ************************************ 00:07:58.944 01:51:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:58.944 END TEST dd_invalid_arguments 00:07:58.944 ************************************ 00:07:58.944 01:51:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:58.944 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.944 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:58.945 ************************************ 00:07:58.945 START TEST dd_double_input 00:07:58.945 ************************************ 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:58.945 [2024-07-25 01:51:14.103272] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.945 00:07:58.945 real 0m0.072s 00:07:58.945 user 0m0.043s 00:07:58.945 sys 0m0.027s 00:07:58.945 ************************************ 00:07:58.945 END TEST dd_double_input 00:07:58.945 ************************************ 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:58.945 ************************************ 00:07:58.945 START TEST dd_double_output 00:07:58.945 ************************************ 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.945 01:51:14 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:58.945 [2024-07-25 01:51:14.218945] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.945 00:07:58.945 real 0m0.066s 00:07:58.945 user 0m0.049s 00:07:58.945 sys 0m0.016s 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.945 ************************************ 00:07:58.945 END TEST dd_double_output 00:07:58.945 ************************************ 00:07:58.945 01:51:14 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 ************************************ 00:07:59.205 START TEST dd_no_input 00:07:59.205 ************************************ 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:59.205 [2024-07-25 01:51:14.345278] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.205 00:07:59.205 real 0m0.073s 00:07:59.205 user 0m0.044s 00:07:59.205 sys 0m0.028s 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 ************************************ 00:07:59.205 END TEST dd_no_input 00:07:59.205 ************************************ 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 ************************************ 00:07:59.205 START TEST dd_no_output 00:07:59.205 ************************************ 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.205 01:51:14 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.205 [2024-07-25 01:51:14.467336] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.205 00:07:59.205 real 0m0.069s 00:07:59.205 user 0m0.044s 00:07:59.205 sys 0m0.024s 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.205 01:51:14 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 ************************************ 00:07:59.205 END TEST dd_no_output 00:07:59.205 ************************************ 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.465 ************************************ 00:07:59.465 START TEST dd_wrong_blocksize 00:07:59.465 ************************************ 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.465 01:51:14 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:59.465 [2024-07-25 01:51:14.590605] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.465 00:07:59.465 real 0m0.072s 00:07:59.465 user 0m0.047s 00:07:59.465 sys 0m0.024s 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:59.465 ************************************ 00:07:59.465 END TEST dd_wrong_blocksize 00:07:59.465 ************************************ 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.465 ************************************ 00:07:59.465 START TEST dd_smaller_blocksize 00:07:59.465 ************************************ 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.465 01:51:14 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.465 01:51:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:59.465 [2024-07-25 01:51:14.714618] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:07:59.465 [2024-07-25 01:51:14.714709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76789 ] 00:07:59.724 [2024-07-25 01:51:14.835106] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
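Every test in this negative suite follows the same pattern: spdk_dd is launched through the NOT wrapper with exactly one invalid input, and the test passes only when the binary exits non-zero with the expected *ERROR* line. A simplified sketch of that pattern (NOT here is a bare stand-in for the fuller helper in autotest_common.sh, and the dd.dump paths are shortened; the expected es codes are the ones visible in this log):

# Negative-test pattern: each invocation below must fail.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
NOT() { ! "$@"; }                                    # succeeds only when the command fails
NOT "$SPDK_DD" --ii= --ob=                           # unrecognized option    -> es=2
NOT "$SPDK_DD" --if=dd.dump0 --ib= --ob=             # --if and --ib together -> es=22
NOT "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --bs=0    # invalid --bs value     -> es=22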
00:07:59.724 [2024-07-25 01:51:14.855414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.724 [2024-07-25 01:51:14.895603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.724 [2024-07-25 01:51:14.927239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.724 [2024-07-25 01:51:14.943204] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:59.724 [2024-07-25 01:51:14.943272] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.724 [2024-07-25 01:51:15.005970] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.984 00:07:59.984 real 0m0.419s 00:07:59.984 user 0m0.213s 00:07:59.984 sys 0m0.100s 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.984 ************************************ 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:59.984 END TEST dd_smaller_blocksize 00:07:59.984 ************************************ 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.984 ************************************ 00:07:59.984 START TEST dd_invalid_count 00:07:59.984 ************************************ 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 
-- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:59.984 [2024-07-25 01:51:15.180771] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.984 00:07:59.984 real 0m0.071s 00:07:59.984 user 0m0.044s 00:07:59.984 sys 0m0.026s 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:59.984 ************************************ 00:07:59.984 END TEST dd_invalid_count 00:07:59.984 ************************************ 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:59.984 ************************************ 00:07:59.984 START TEST dd_invalid_oflag 00:07:59.984 ************************************ 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.984 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:00.244 [2024-07-25 01:51:15.298853] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.244 00:08:00.244 real 0m0.070s 00:08:00.244 user 0m0.048s 00:08:00.244 sys 0m0.022s 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:00.244 ************************************ 00:08:00.244 END TEST dd_invalid_oflag 00:08:00.244 ************************************ 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:00.244 ************************************ 00:08:00.244 START TEST dd_invalid_iflag 00:08:00.244 ************************************ 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:00.244 [2024-07-25 01:51:15.418982] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.244 00:08:00.244 real 0m0.070s 00:08:00.244 user 0m0.049s 00:08:00.244 sys 0m0.020s 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:00.244 ************************************ 00:08:00.244 END TEST dd_invalid_iflag 00:08:00.244 ************************************ 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:00.244 ************************************ 00:08:00.244 START TEST dd_unknown_flag 00:08:00.244 ************************************ 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:00.244 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:00.244 [2024-07-25 01:51:15.540389] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:00.244 [2024-07-25 01:51:15.540486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76876 ] 00:08:00.504 [2024-07-25 01:51:15.660966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.504 [2024-07-25 01:51:15.679058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.504 [2024-07-25 01:51:15.722259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.504 [2024-07-25 01:51:15.754263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.504 [2024-07-25 01:51:15.770015] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:00.504 [2024-07-25 01:51:15.770081] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.504 [2024-07-25 01:51:15.770154] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:00.504 [2024-07-25 01:51:15.770169] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.504 [2024-07-25 01:51:15.770437] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:00.504 [2024-07-25 01:51:15.770456] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.504 [2024-07-25 01:51:15.770511] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:00.504 [2024-07-25 01:51:15.770525] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:00.763 [2024-07-25 01:51:15.834003] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:00.763 01:51:15 
spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.763 00:08:00.763 real 0m0.417s 00:08:00.763 user 0m0.209s 00:08:00.763 sys 0m0.114s 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.763 ************************************ 00:08:00.763 END TEST dd_unknown_flag 00:08:00.763 ************************************ 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:00.763 ************************************ 00:08:00.763 START TEST dd_invalid_json 00:08:00.763 ************************************ 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:00.763 01:51:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:00.763 [2024-07-25 01:51:16.013719] 
Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:00.763 [2024-07-25 01:51:16.013823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76905 ] 00:08:01.023 [2024-07-25 01:51:16.134141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:01.023 [2024-07-25 01:51:16.154261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.023 [2024-07-25 01:51:16.194513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.023 [2024-07-25 01:51:16.194615] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:01.023 [2024-07-25 01:51:16.194632] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:01.023 [2024-07-25 01:51:16.194643] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:01.023 [2024-07-25 01:51:16.194684] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.023 00:08:01.023 real 0m0.307s 00:08:01.023 user 0m0.141s 00:08:01.023 sys 0m0.064s 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:01.023 ************************************ 00:08:01.023 END TEST dd_invalid_json 00:08:01.023 ************************************ 00:08:01.023 ************************************ 00:08:01.023 END TEST spdk_dd_negative 00:08:01.023 ************************************ 00:08:01.023 00:08:01.023 real 0m2.475s 00:08:01.023 user 0m1.186s 00:08:01.023 sys 0m0.924s 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.023 01:51:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.282 00:08:01.282 real 0m59.129s 00:08:01.282 user 0m37.373s 00:08:01.282 sys 0m24.615s 00:08:01.282 01:51:16 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.282 01:51:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:01.282 ************************************ 00:08:01.282 END TEST spdk_dd 00:08:01.282 ************************************ 00:08:01.282 01:51:16 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:08:01.282 01:51:16 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:01.282 01:51:16 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:01.282 01:51:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:01.282 01:51:16 -- common/autotest_common.sh@10 -- # set +x 00:08:01.282 01:51:16 -- 
spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:08:01.282 01:51:16 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:08:01.282 01:51:16 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:08:01.282 01:51:16 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:08:01.283 01:51:16 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:08:01.283 01:51:16 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:08:01.283 01:51:16 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:01.283 01:51:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.283 01:51:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.283 01:51:16 -- common/autotest_common.sh@10 -- # set +x 00:08:01.283 ************************************ 00:08:01.283 START TEST nvmf_tcp 00:08:01.283 ************************************ 00:08:01.283 01:51:16 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:01.283 * Looking for test storage... 00:08:01.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:01.283 01:51:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:01.283 01:51:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:01.283 01:51:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:01.283 01:51:16 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.283 01:51:16 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.283 01:51:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.283 ************************************ 00:08:01.283 START TEST nvmf_target_core 00:08:01.283 ************************************ 00:08:01.283 01:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:01.543 * Looking for test storage... 00:08:01.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.543 ************************************ 00:08:01.543 START TEST nvmf_host_management 00:08:01.543 ************************************ 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:01.543 * Looking for test storage... 
00:08:01.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:01.543 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:01.544 Cannot find device "nvmf_init_br" 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:01.544 Cannot find device "nvmf_tgt_br" 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:01.544 Cannot find device "nvmf_tgt_br2" 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:01.544 Cannot find device "nvmf_init_br" 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:01.544 Cannot find device "nvmf_tgt_br" 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:01.544 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:01.544 Cannot find device "nvmf_tgt_br2" 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:01.804 Cannot find device "nvmf_br" 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:01.804 Cannot find device "nvmf_init_if" 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:01.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:01.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:01.804 01:51:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:01.804 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:01.804 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:01.804 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:01.804 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:01.804 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:02.063 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:02.063 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:02.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:08:02.063 00:08:02.063 --- 10.0.0.2 ping statistics --- 00:08:02.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.063 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:02.063 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:02.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:02.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:02.063 00:08:02.063 --- 10.0.0.3 ping statistics --- 00:08:02.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.063 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:02.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:02.064 00:08:02.064 --- 10.0.0.1 ping statistics --- 00:08:02.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.064 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=77191 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 77191 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 77191 ']' 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.064 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.064 [2024-07-25 01:51:17.227477] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:02.064 [2024-07-25 01:51:17.227569] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.064 [2024-07-25 01:51:17.350752] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:02.323 [2024-07-25 01:51:17.367818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.323 [2024-07-25 01:51:17.402775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.323 [2024-07-25 01:51:17.402865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.323 [2024-07-25 01:51:17.402877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.323 [2024-07-25 01:51:17.402889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.323 [2024-07-25 01:51:17.402895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
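For readers tracing this run: the nvmfappstart + waitforlisten exchange above amounts to starting nvmf_tgt inside the test netns and polling its RPC socket until the app answers. The sketch below is illustrative only: the nvmf_tgt command line, socket path, and rpc.py location are copied from the trace, while the retry bound and the rpc_get_methods liveness probe are a simplified reading of what autotest_common.sh's waitforlisten does, not the helper verbatim.
  # Hedged sketch of the start-and-wait sequence (loop details assumed):
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  for ((i = 100; i != 0; i--)); do
      # rpc_get_methods only succeeds once the app listens on the socket
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done
  (( i != 0 )) || { echo "nvmf_tgt did not come up" >&2; exit 1; }
Probing an RPC that every SPDK app exposes avoids racing on process startup, and the same pattern reappears below when bdevperf is waited on via /var/tmp/bdevperf.sock.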
00:08:02.323 [2024-07-25 01:51:17.403428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.323 [2024-07-25 01:51:17.403951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.323 [2024-07-25 01:51:17.404039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:02.323 [2024-07-25 01:51:17.404044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.323 [2024-07-25 01:51:17.432710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 [2024-07-25 01:51:17.524263] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:02.323 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:02.324 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.324 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.324 Malloc0 00:08:02.324 [2024-07-25 01:51:17.603218] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.324 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.324 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:02.324 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.324 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=77237 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 77237 /var/tmp/bdevperf.sock 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 77237 ']' 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:02.583 { 00:08:02.583 "params": { 00:08:02.583 "name": "Nvme$subsystem", 00:08:02.583 "trtype": "$TEST_TRANSPORT", 00:08:02.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.583 "adrfam": "ipv4", 00:08:02.583 "trsvcid": "$NVMF_PORT", 00:08:02.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.583 "hdgst": ${hdgst:-false}, 00:08:02.583 "ddgst": ${ddgst:-false} 00:08:02.583 }, 00:08:02.583 "method": "bdev_nvme_attach_controller" 00:08:02.583 } 00:08:02.583 EOF 00:08:02.583 )") 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:02.583 01:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:02.583 "params": { 00:08:02.583 "name": "Nvme0", 00:08:02.583 "trtype": "tcp", 00:08:02.583 "traddr": "10.0.0.2", 00:08:02.583 "adrfam": "ipv4", 00:08:02.583 "trsvcid": "4420", 00:08:02.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:02.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:02.583 "hdgst": false, 00:08:02.583 "ddgst": false 00:08:02.583 }, 00:08:02.583 "method": "bdev_nvme_attach_controller" 00:08:02.583 }' 00:08:02.583 [2024-07-25 01:51:17.709858] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... [2024-07-25 01:51:17.709933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77237 ] 00:08:02.583 [2024-07-25 01:51:17.835692] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. [2024-07-25 01:51:17.855715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.842 [2024-07-25 01:51:17.897328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.842 [2024-07-25 01:51:17.938128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.842 Running I/O for 10 seconds... 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:02.842 01:51:18
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.842 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.101 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:03.101 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:03.101 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.361 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:03.362 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.362 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.362 [2024-07-25 01:51:18.485582] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:03.362 [2024-07-25 01:51:18.485622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[editor's note: 63 further WRITE command / ABORTED - SQ DELETION completion pairs follow as the in-flight queue drains, identical except that cid advances 1 through 63 and lba advances 82048 through 89984 in 128-block steps; elided]
00:08:03.363 [2024-07-25 01:51:18.487076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a38990 is same with the state(5) to be set 00:08:03.363 [2024-07-25 01:51:18.487122] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a38990 was disconnected and freed. reset controller. 00:08:03.363 [2024-07-25 01:51:18.487275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:03.363 [2024-07-25 01:51:18.487314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[editor's note: the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1 through cid:3; elided]
00:08:03.363 [2024-07-25 01:51:18.487378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159ce60 is same with the state(5) to be set 00:08:03.363 [2024-07-25 01:51:18.488533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:03.363 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:03.363 00:08:03.363 Latency(us) 00:08:03.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.363 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:03.363 Job: Nvme0n1 ended in about 0.45 seconds with error 00:08:03.363 Verification LBA range: start 0x0 length 0x400 00:08:03.363 Nvme0n1 : 0.45 1421.20 88.82 142.12 0.00 39337.52 2293.76 43372.92 00:08:03.363 =================================================================================================================== 00:08:03.363 Total : 1421.20 88.82 142.12 0.00 39337.52 2293.76 43372.92 00:08:03.363 [2024-07-25 01:51:18.490581] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.363 [2024-07-25 01:51:18.490609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159ce60 (9): Bad file descriptor 00:08:03.363 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.363 01:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:03.363 [2024-07-25 01:51:18.495343] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 77237 00:08:04.301 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (77237) - No such process 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:04.301 { 00:08:04.301 "params": { 00:08:04.301 "name": "Nvme$subsystem", 00:08:04.301 "trtype": "$TEST_TRANSPORT", 00:08:04.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.301 "adrfam": "ipv4", 00:08:04.301 "trsvcid": "$NVMF_PORT", 00:08:04.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.301 "hdgst": ${hdgst:-false}, 00:08:04.301 "ddgst": ${ddgst:-false} 00:08:04.301 }, 00:08:04.301 "method": "bdev_nvme_attach_controller" 00:08:04.301 } 00:08:04.301 EOF 00:08:04.301 )") 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:04.301 01:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:04.301 "params": { 00:08:04.301 "name": "Nvme0", 00:08:04.301 "trtype": "tcp", 00:08:04.301 "traddr": "10.0.0.2", 00:08:04.301 "adrfam": "ipv4", 00:08:04.301 "trsvcid": "4420", 00:08:04.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:04.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:04.301 "hdgst": false, 00:08:04.301 "ddgst": false 00:08:04.301 }, 00:08:04.301 "method": "bdev_nvme_attach_controller" 00:08:04.301 }' 00:08:04.301 [2024-07-25 01:51:19.544299] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:04.301 [2024-07-25 01:51:19.544358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77277 ] 00:08:04.560 [2024-07-25 01:51:19.660604] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:04.560 [2024-07-25 01:51:19.678476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.560 [2024-07-25 01:51:19.712278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.560 [2024-07-25 01:51:19.748631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.560 Running I/O for 1 seconds... 00:08:05.939 00:08:05.939 Latency(us) 00:08:05.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.939 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:05.939 Verification LBA range: start 0x0 length 0x400 00:08:05.939 Nvme0n1 : 1.03 1674.00 104.63 0.00 0.00 37480.98 3932.16 38368.35 00:08:05.939 =================================================================================================================== 00:08:05.939 Total : 1674.00 104.63 0.00 0.00 37480.98 3932.16 38368.35 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:08:05.939 rmmod nvme_tcp 00:08:05.939 rmmod nvme_fabrics 00:08:05.939 rmmod nvme_keyring 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 77191 ']' 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 77191 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 77191 ']' 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 77191 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77191 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:05.939 killing process with pid 77191 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77191' 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 77191 00:08:05.939 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 77191 00:08:06.209 [2024-07-25 01:51:21.287038] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:06.209 ************************************ 00:08:06.209 END TEST nvmf_host_management 00:08:06.209 ************************************ 00:08:06.209 00:08:06.209 real 0m4.689s 
00:08:06.209 user 0m17.578s 00:08:06.209 sys 0m1.240s 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.209 ************************************ 00:08:06.209 START TEST nvmf_lvol 00:08:06.209 ************************************ 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:06.209 * Looking for test storage... 00:08:06.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.209 
01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.209 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2-6 -- # [editor's note: PATH is rebuilt by prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed; the resulting value, which repeats those prefixes several times over, is elided] 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.210 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:06.482 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:06.483 Cannot find device "nvmf_tgt_br" 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.483 Cannot find device "nvmf_tgt_br2" 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:06.483 Cannot find device "nvmf_tgt_br" 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:06.483 Cannot find device "nvmf_tgt_br2" 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:06.483 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:06.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:08:06.741 00:08:06.741 --- 10.0.0.2 ping statistics --- 00:08:06.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.741 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:06.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:08:06.741 00:08:06.741 --- 10.0.0.3 ping statistics --- 00:08:06.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.741 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:06.741 00:08:06.741 --- 10.0.0.1 ping statistics --- 00:08:06.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.741 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.741 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=77492 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 77492 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 77492 ']' 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.742 01:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.742 [2024-07-25 01:51:21.878075] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:06.742 [2024-07-25 01:51:21.878151] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.742 [2024-07-25 01:51:21.995079] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
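(editor's note: nvmfappstart above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace built by the earlier veth/bridge setup, then waitforlisten blocks until the app's RPC socket answers. A rough sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock control socket; the real waitforlisten helper additionally verifies the pid and the listening socket:)

    # Sketch: start the target in the test namespace and poll its RPC
    # socket until initialization has finished.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the app is fully initialized
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done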
00:08:06.742 [2024-07-25 01:51:22.013081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.000 [2024-07-25 01:51:22.045892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.000 [2024-07-25 01:51:22.045949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.000 [2024-07-25 01:51:22.045958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.000 [2024-07-25 01:51:22.045965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.000 [2024-07-25 01:51:22.045971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.000 [2024-07-25 01:51:22.046125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.000 [2024-07-25 01:51:22.046261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.000 [2024-07-25 01:51:22.046265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.000 [2024-07-25 01:51:22.073792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.000 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.000 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:07.000 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.000 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.000 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.000 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.000 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:07.257 [2024-07-25 01:51:22.348345] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.257 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.516 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:07.516 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.774 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:07.775 01:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:08.033 01:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:08.291 01:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c7b368bb-8c62-4a68-9f37-177a07610da6 00:08:08.291 01:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c7b368bb-8c62-4a68-9f37-177a07610da6 lvol 20 00:08:08.548 01:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=2b25c6a3-fcc2-4fe6-9b76-4db0253ca636 00:08:08.548 01:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.806 01:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2b25c6a3-fcc2-4fe6-9b76-4db0253ca636 00:08:09.063 01:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.063 [2024-07-25 01:51:24.351427] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.322 01:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.322 01:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=77549 00:08:09.322 01:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:09.322 01:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:10.697 01:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2b25c6a3-fcc2-4fe6-9b76-4db0253ca636 MY_SNAPSHOT 00:08:10.697 01:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=68523bbe-9990-43cb-bea8-ffc6e1a37b5e 00:08:10.697 01:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2b25c6a3-fcc2-4fe6-9b76-4db0253ca636 30 00:08:10.955 01:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 68523bbe-9990-43cb-bea8-ffc6e1a37b5e MY_CLONE 00:08:11.213 01:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8453b072-d2f5-45c2-8464-631c0215febb 00:08:11.213 01:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8453b072-d2f5-45c2-8464-631c0215febb 00:08:11.779 01:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 77549 00:08:19.891 Initializing NVMe Controllers 00:08:19.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:19.891 Controller IO queue size 128, less than required. 00:08:19.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:19.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:19.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:19.891 Initialization complete. Launching workers. 
00:08:19.891 ======================================================== 00:08:19.891 Latency(us) 00:08:19.891 Device Information : IOPS MiB/s Average min max 00:08:19.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11217.30 43.82 11415.26 2530.17 61947.82 00:08:19.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11220.40 43.83 11410.59 433.77 81181.56 00:08:19.891 ======================================================== 00:08:19.891 Total : 22437.69 87.65 11412.92 433.77 81181.56 00:08:19.891 00:08:19.891 01:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:19.891 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2b25c6a3-fcc2-4fe6-9b76-4db0253ca636 00:08:20.150 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7b368bb-8c62-4a68-9f37-177a07610da6 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.409 rmmod nvme_tcp 00:08:20.409 rmmod nvme_fabrics 00:08:20.409 rmmod nvme_keyring 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 77492 ']' 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 77492 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 77492 ']' 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 77492 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77492 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.409 killing process with pid 77492 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 77492' 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 77492 00:08:20.409 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 77492 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:20.668 00:08:20.668 real 0m14.472s 00:08:20.668 user 1m1.624s 00:08:20.668 sys 0m3.970s 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:20.668 ************************************ 00:08:20.668 END TEST nvmf_lvol 00:08:20.668 ************************************ 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.668 ************************************ 00:08:20.668 START TEST nvmf_lvs_grow 00:08:20.668 ************************************ 00:08:20.668 01:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.927 * Looking for test storage... 
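Before the next suite starts, note the teardown order nvmf_lvol.sh just logged: it is strictly the reverse of setup. A condensed sketch with the IDs printed earlier (same rpc.py path assumed):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0                  # stop serving I/O first
  $rpc bdev_lvol_delete 2b25c6a3-fcc2-4fe6-9b76-4db0253ca636             # then the lvol
  $rpc bdev_lvol_delete_lvstore -u c7b368bb-8c62-4a68-9f37-177a07610da6  # then its lvstore
  modprobe -v -r nvme-tcp   # also unloads nvme_fabrics/nvme_keyring, per the rmmod lines above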
00:08:20.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:08:20.927 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
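The host-identity block at the top of nvmf/common.sh is self-contained and reusable; a sketch, assuming nvme-cli is available (the generated hostnqn shown above is specific to this run):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<random UUID>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # the trailing UUID doubles as the host ID
  # illustrative only: how a host-side connect would consume these values
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn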
00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:20.928 Cannot find device "nvmf_tgt_br" 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:20.928 Cannot find device "nvmf_tgt_br2" 00:08:20.928 01:51:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:20.928 Cannot find device "nvmf_tgt_br" 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:20.928 Cannot find device "nvmf_tgt_br2" 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:20.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:20.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:20.928 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:21.188 01:51:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:21.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:08:21.188 00:08:21.188 --- 10.0.0.2 ping statistics --- 00:08:21.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.188 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:21.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:21.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:21.188 00:08:21.188 --- 10.0.0.3 ping statistics --- 00:08:21.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.188 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:21.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:21.188 00:08:21.188 --- 10.0.0.1 ping statistics --- 00:08:21.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.188 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=77875 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 77875 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 77875 ']' 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.188 01:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.188 [2024-07-25 01:51:36.425354] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
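The virtual test network assembled above boils down to two veth pairs joined by a bridge, with the target ends moved into their own namespace; condensed, using the same interface names as nvmf/common.sh:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # bring each link up as in the log
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings are the smoke test: 10.0.0.2, 10.0.0.3, and (from inside the namespace) 10.0.0.1 must all answer before any NVMe/TCP traffic is attempted.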
00:08:21.188 [2024-07-25 01:51:36.425432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.447 [2024-07-25 01:51:36.547755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:21.447 [2024-07-25 01:51:36.562767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.447 [2024-07-25 01:51:36.594642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.447 [2024-07-25 01:51:36.594708] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.447 [2024-07-25 01:51:36.594718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.447 [2024-07-25 01:51:36.594725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.447 [2024-07-25 01:51:36.594731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.448 [2024-07-25 01:51:36.594760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.448 [2024-07-25 01:51:36.620951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:22.385 [2024-07-25 01:51:37.602741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.385 ************************************ 00:08:22.385 START TEST lvs_grow_clean 00:08:22.385 ************************************ 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:22.385 01:51:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:22.385 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.644 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:22.644 01:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:22.903 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=095390c4-000e-4423-aaec-b68b471b192d 00:08:22.903 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:22.903 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:23.162 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:23.162 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:23.162 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 095390c4-000e-4423-aaec-b68b471b192d lvol 150 00:08:23.420 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c7a14ebb-c633-47c2-88dd-64dea504e8b4 00:08:23.420 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:23.420 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:23.679 [2024-07-25 01:51:38.735584] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:23.679 [2024-07-25 01:51:38.735673] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:23.679 true 00:08:23.679 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:23.679 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:23.679 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:23.679 01:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:23.937 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7a14ebb-c633-47c2-88dd-64dea504e8b4 00:08:24.196 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:24.455 [2024-07-25 01:51:39.592101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.455 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77952 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77952 /var/tmp/bdevperf.sock 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 77952 ']' 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.730 01:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:24.730 [2024-07-25 01:51:39.840721] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
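The grow test's moving parts condense to one backing file and a handful of RPCs; a sketch, assuming the same rpc.py path (sizes and options follow the xtrace above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"                          # 50 clusters at 4 MiB
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_create -u "$lvs" lvol 150         # 150 MiB lvol, exported over NVMe/TCP
  truncate -s 400M "$aio"                          # grow the file underneath the bdev...
  $rpc bdev_aio_rescan aio_bdev                    # ...51200 -> 102400 blocks, as logged above
  $rpc bdev_lvol_grow_lvstore -u "$lvs"            # run mid-I/O; total_data_clusters 49 -> 99

The grow_lvstore call is issued while bdevperf keeps randwrite traffic on the lvol, so the 49 -> 99 check below doubles as an online-resize test.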
00:08:24.730 [2024-07-25 01:51:39.840796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77952 ] 00:08:24.730 [2024-07-25 01:51:39.956659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.730 [2024-07-25 01:51:39.978471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.014 [2024-07-25 01:51:40.023813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.014 [2024-07-25 01:51:40.058554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.581 01:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.581 01:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:25.581 01:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:25.840 Nvme0n1 00:08:25.840 01:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:26.099 [ 00:08:26.099 { 00:08:26.099 "name": "Nvme0n1", 00:08:26.099 "aliases": [ 00:08:26.099 "c7a14ebb-c633-47c2-88dd-64dea504e8b4" 00:08:26.099 ], 00:08:26.099 "product_name": "NVMe disk", 00:08:26.099 "block_size": 4096, 00:08:26.099 "num_blocks": 38912, 00:08:26.099 "uuid": "c7a14ebb-c633-47c2-88dd-64dea504e8b4", 00:08:26.099 "assigned_rate_limits": { 00:08:26.099 "rw_ios_per_sec": 0, 00:08:26.099 "rw_mbytes_per_sec": 0, 00:08:26.099 "r_mbytes_per_sec": 0, 00:08:26.099 "w_mbytes_per_sec": 0 00:08:26.099 }, 00:08:26.099 "claimed": false, 00:08:26.099 "zoned": false, 00:08:26.099 "supported_io_types": { 00:08:26.099 "read": true, 00:08:26.099 "write": true, 00:08:26.099 "unmap": true, 00:08:26.099 "flush": true, 00:08:26.099 "reset": true, 00:08:26.099 "nvme_admin": true, 00:08:26.099 "nvme_io": true, 00:08:26.099 "nvme_io_md": false, 00:08:26.099 "write_zeroes": true, 00:08:26.099 "zcopy": false, 00:08:26.099 "get_zone_info": false, 00:08:26.099 "zone_management": false, 00:08:26.099 "zone_append": false, 00:08:26.099 "compare": true, 00:08:26.099 "compare_and_write": true, 00:08:26.099 "abort": true, 00:08:26.099 "seek_hole": false, 00:08:26.099 "seek_data": false, 00:08:26.099 "copy": true, 00:08:26.099 "nvme_iov_md": false 00:08:26.099 }, 00:08:26.099 "memory_domains": [ 00:08:26.099 { 00:08:26.099 "dma_device_id": "system", 00:08:26.099 "dma_device_type": 1 00:08:26.099 } 00:08:26.099 ], 00:08:26.099 "driver_specific": { 00:08:26.099 "nvme": [ 00:08:26.099 { 00:08:26.099 "trid": { 00:08:26.099 "trtype": "TCP", 00:08:26.099 "adrfam": "IPv4", 00:08:26.099 "traddr": "10.0.0.2", 00:08:26.099 "trsvcid": "4420", 00:08:26.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:26.099 }, 00:08:26.099 "ctrlr_data": { 00:08:26.099 "cntlid": 1, 00:08:26.099 "vendor_id": "0x8086", 00:08:26.099 "model_number": "SPDK bdev Controller", 00:08:26.099 "serial_number": "SPDK0", 00:08:26.099 "firmware_revision": "24.09", 
00:08:26.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:26.099 "oacs": { 00:08:26.099 "security": 0, 00:08:26.099 "format": 0, 00:08:26.099 "firmware": 0, 00:08:26.099 "ns_manage": 0 00:08:26.099 }, 00:08:26.099 "multi_ctrlr": true, 00:08:26.099 "ana_reporting": false 00:08:26.099 }, 00:08:26.099 "vs": { 00:08:26.099 "nvme_version": "1.3" 00:08:26.099 }, 00:08:26.099 "ns_data": { 00:08:26.099 "id": 1, 00:08:26.099 "can_share": true 00:08:26.099 } 00:08:26.099 } 00:08:26.099 ], 00:08:26.099 "mp_policy": "active_passive" 00:08:26.099 } 00:08:26.099 } 00:08:26.099 ] 00:08:26.099 01:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77976 00:08:26.099 01:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:26.099 01:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:26.099 Running I/O for 10 seconds... 00:08:27.056 Latency(us) 00:08:27.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.056 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:27.056 =================================================================================================================== 00:08:27.056 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:27.056 00:08:27.992 01:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:28.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.263 Nvme0n1 : 2.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:28.263 =================================================================================================================== 00:08:28.263 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:28.263 00:08:28.263 true 00:08:28.263 01:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:28.263 01:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:28.832 01:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:28.832 01:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:28.832 01:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 77976 00:08:29.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.091 Nvme0n1 : 3.00 6942.00 27.12 0.00 0.00 0.00 0.00 0.00 00:08:29.091 =================================================================================================================== 00:08:29.091 Total : 6942.00 27.12 0.00 0.00 0.00 0.00 0.00 00:08:29.091 00:08:30.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.029 Nvme0n1 : 4.00 6952.75 27.16 0.00 0.00 0.00 0.00 0.00 00:08:30.029 =================================================================================================================== 
00:08:30.029 Total : 6952.75 27.16 0.00 0.00 0.00 0.00 0.00 00:08:30.029 00:08:31.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.406 Nvme0n1 : 5.00 6883.00 26.89 0.00 0.00 0.00 0.00 0.00 00:08:31.407 =================================================================================================================== 00:08:31.407 Total : 6883.00 26.89 0.00 0.00 0.00 0.00 0.00 00:08:31.407 00:08:32.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.345 Nvme0n1 : 6.00 6815.33 26.62 0.00 0.00 0.00 0.00 0.00 00:08:32.345 =================================================================================================================== 00:08:32.345 Total : 6815.33 26.62 0.00 0.00 0.00 0.00 0.00 00:08:32.345 00:08:33.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.283 Nvme0n1 : 7.00 6803.29 26.58 0.00 0.00 0.00 0.00 0.00 00:08:33.283 =================================================================================================================== 00:08:33.283 Total : 6803.29 26.58 0.00 0.00 0.00 0.00 0.00 00:08:33.283 00:08:34.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.221 Nvme0n1 : 8.00 6762.50 26.42 0.00 0.00 0.00 0.00 0.00 00:08:34.221 =================================================================================================================== 00:08:34.221 Total : 6762.50 26.42 0.00 0.00 0.00 0.00 0.00 00:08:34.221 00:08:35.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.160 Nvme0n1 : 9.00 6730.78 26.29 0.00 0.00 0.00 0.00 0.00 00:08:35.160 =================================================================================================================== 00:08:35.160 Total : 6730.78 26.29 0.00 0.00 0.00 0.00 0.00 00:08:35.160 00:08:36.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.103 Nvme0n1 : 10.00 6730.80 26.29 0.00 0.00 0.00 0.00 0.00 00:08:36.103 =================================================================================================================== 00:08:36.103 Total : 6730.80 26.29 0.00 0.00 0.00 0.00 0.00 00:08:36.103 00:08:36.103 00:08:36.103 Latency(us) 00:08:36.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.103 Nvme0n1 : 10.01 6726.10 26.27 0.00 0.00 19022.75 13405.09 44564.48 00:08:36.103 =================================================================================================================== 00:08:36.103 Total : 6726.10 26.27 0.00 0.00 19022.75 13405.09 44564.48 00:08:36.103 0 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77952 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 77952 ']' 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 77952 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77952 00:08:36.103 killing 
process with pid 77952 00:08:36.103 Received shutdown signal, test time was about 10.000000 seconds 00:08:36.103 00:08:36.103 Latency(us) 00:08:36.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.103 =================================================================================================================== 00:08:36.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77952' 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 77952 00:08:36.103 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 77952 00:08:36.362 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.621 01:51:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:36.880 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:36.880 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:37.140 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:37.140 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:37.140 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:37.399 [2024-07-25 01:51:52.556379] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.399 01:51:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:37.399 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:37.658 request: 00:08:37.658 { 00:08:37.658 "uuid": "095390c4-000e-4423-aaec-b68b471b192d", 00:08:37.658 "method": "bdev_lvol_get_lvstores", 00:08:37.658 "req_id": 1 00:08:37.658 } 00:08:37.658 Got JSON-RPC error response 00:08:37.658 response: 00:08:37.658 { 00:08:37.658 "code": -19, 00:08:37.658 "message": "No such device" 00:08:37.658 } 00:08:37.658 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:37.659 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.659 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:37.659 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.659 01:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.918 aio_bdev 00:08:37.918 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c7a14ebb-c633-47c2-88dd-64dea504e8b4 00:08:37.918 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c7a14ebb-c633-47c2-88dd-64dea504e8b4 00:08:37.918 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.918 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:37.918 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.918 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.918 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:38.177 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7a14ebb-c633-47c2-88dd-64dea504e8b4 -t 2000 00:08:38.437 [ 00:08:38.437 { 00:08:38.437 "name": "c7a14ebb-c633-47c2-88dd-64dea504e8b4", 00:08:38.437 "aliases": [ 00:08:38.437 "lvs/lvol" 00:08:38.437 ], 00:08:38.437 "product_name": "Logical 
Volume", 00:08:38.437 "block_size": 4096, 00:08:38.437 "num_blocks": 38912, 00:08:38.437 "uuid": "c7a14ebb-c633-47c2-88dd-64dea504e8b4", 00:08:38.437 "assigned_rate_limits": { 00:08:38.437 "rw_ios_per_sec": 0, 00:08:38.437 "rw_mbytes_per_sec": 0, 00:08:38.437 "r_mbytes_per_sec": 0, 00:08:38.437 "w_mbytes_per_sec": 0 00:08:38.437 }, 00:08:38.437 "claimed": false, 00:08:38.437 "zoned": false, 00:08:38.437 "supported_io_types": { 00:08:38.437 "read": true, 00:08:38.437 "write": true, 00:08:38.437 "unmap": true, 00:08:38.437 "flush": false, 00:08:38.437 "reset": true, 00:08:38.437 "nvme_admin": false, 00:08:38.437 "nvme_io": false, 00:08:38.437 "nvme_io_md": false, 00:08:38.437 "write_zeroes": true, 00:08:38.437 "zcopy": false, 00:08:38.437 "get_zone_info": false, 00:08:38.437 "zone_management": false, 00:08:38.437 "zone_append": false, 00:08:38.437 "compare": false, 00:08:38.437 "compare_and_write": false, 00:08:38.437 "abort": false, 00:08:38.437 "seek_hole": true, 00:08:38.437 "seek_data": true, 00:08:38.437 "copy": false, 00:08:38.437 "nvme_iov_md": false 00:08:38.437 }, 00:08:38.437 "driver_specific": { 00:08:38.437 "lvol": { 00:08:38.437 "lvol_store_uuid": "095390c4-000e-4423-aaec-b68b471b192d", 00:08:38.437 "base_bdev": "aio_bdev", 00:08:38.437 "thin_provision": false, 00:08:38.437 "num_allocated_clusters": 38, 00:08:38.437 "snapshot": false, 00:08:38.437 "clone": false, 00:08:38.437 "esnap_clone": false 00:08:38.437 } 00:08:38.437 } 00:08:38.437 } 00:08:38.437 ] 00:08:38.437 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:38.437 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:38.437 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:38.696 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:38.696 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:38.696 01:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:38.956 01:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:38.956 01:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c7a14ebb-c633-47c2-88dd-64dea504e8b4 00:08:39.215 01:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 095390c4-000e-4423-aaec-b68b471b192d 00:08:39.475 01:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.475 01:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:40.042 ************************************ 00:08:40.042 END TEST lvs_grow_clean 00:08:40.042 ************************************ 00:08:40.042 
00:08:40.042 real 0m17.452s 00:08:40.042 user 0m16.507s 00:08:40.042 sys 0m2.300s 00:08:40.042 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.042 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.043 ************************************ 00:08:40.043 START TEST lvs_grow_dirty 00:08:40.043 ************************************ 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:40.043 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.302 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.302 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:40.561 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2283b536-f07c-475e-941a-bc2023b6eade 00:08:40.561 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:40.561 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:40.820 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 
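The dirty variant starts from the same file-backed layout the calls above establish: a 200 MiB file exposed as a 4 KiB-block AIO bdev, carrying an lvstore with 4 MiB clusters, which leaves 49 data clusters after metadata (the check that follows asserts exactly that). Condensed into a sketch, using the paths and sizes from this run:

    SPDK=/home/vagrant/spdk_repo/spdk
    AIO_FILE=$SPDK/test/nvmf/target/aio_bdev
    rpc=$SPDK/scripts/rpc.py
    rm -f "$AIO_FILE"
    truncate -s 200M "$AIO_FILE"                 # sparse backing file
    $rpc bdev_aio_create "$AIO_FILE" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49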
00:08:40.820 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:40.820 01:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2283b536-f07c-475e-941a-bc2023b6eade lvol 150 00:08:41.079 01:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=403370c7-0dbe-4d47-a2ea-d848fab99e7e 00:08:41.079 01:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:41.079 01:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:41.339 [2024-07-25 01:51:56.383702] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:41.339 [2024-07-25 01:51:56.383782] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:41.339 true 00:08:41.339 01:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:41.339 01:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:41.339 01:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:41.339 01:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.598 01:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 403370c7-0dbe-4d47-a2ea-d848fab99e7e 00:08:41.857 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.129 [2024-07-25 01:51:57.312279] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.129 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78221 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:42.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
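Before any growing happens, the 150 MiB lvol is published over NVMe/TCP so bdevperf has something to write to: a subsystem is created, the lvol is added as a namespace, and TCP listeners are opened for both the subsystem and discovery. A sketch of those four RPCs, with the NQN, address, and port from this run ($lvol standing in for the lvol UUID created above):

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420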
00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78221 /var/tmp/bdevperf.sock 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 78221 ']' 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.400 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.400 [2024-07-25 01:51:57.611468] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:42.400 [2024-07-25 01:51:57.612382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78221 ] 00:08:42.659 [2024-07-25 01:51:57.734293] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:42.659 [2024-07-25 01:51:57.753121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.659 [2024-07-25 01:51:57.795521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.659 [2024-07-25 01:51:57.830523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.659 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.659 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:42.659 01:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:42.918 Nvme0n1 00:08:42.918 01:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:43.178 [ 00:08:43.178 { 00:08:43.178 "name": "Nvme0n1", 00:08:43.178 "aliases": [ 00:08:43.178 "403370c7-0dbe-4d47-a2ea-d848fab99e7e" 00:08:43.178 ], 00:08:43.178 "product_name": "NVMe disk", 00:08:43.178 "block_size": 4096, 00:08:43.178 "num_blocks": 38912, 00:08:43.178 "uuid": "403370c7-0dbe-4d47-a2ea-d848fab99e7e", 00:08:43.178 "assigned_rate_limits": { 00:08:43.178 "rw_ios_per_sec": 0, 00:08:43.178 "rw_mbytes_per_sec": 0, 00:08:43.178 "r_mbytes_per_sec": 0, 00:08:43.178 "w_mbytes_per_sec": 0 00:08:43.178 }, 00:08:43.178 "claimed": false, 00:08:43.178 "zoned": false, 00:08:43.178 "supported_io_types": { 00:08:43.178 "read": true, 00:08:43.178 "write": true, 00:08:43.178 "unmap": true, 00:08:43.178 "flush": true, 00:08:43.178 "reset": true, 00:08:43.178 
"nvme_admin": true, 00:08:43.178 "nvme_io": true, 00:08:43.178 "nvme_io_md": false, 00:08:43.178 "write_zeroes": true, 00:08:43.178 "zcopy": false, 00:08:43.178 "get_zone_info": false, 00:08:43.178 "zone_management": false, 00:08:43.178 "zone_append": false, 00:08:43.178 "compare": true, 00:08:43.178 "compare_and_write": true, 00:08:43.178 "abort": true, 00:08:43.178 "seek_hole": false, 00:08:43.178 "seek_data": false, 00:08:43.178 "copy": true, 00:08:43.178 "nvme_iov_md": false 00:08:43.178 }, 00:08:43.178 "memory_domains": [ 00:08:43.178 { 00:08:43.178 "dma_device_id": "system", 00:08:43.178 "dma_device_type": 1 00:08:43.178 } 00:08:43.178 ], 00:08:43.178 "driver_specific": { 00:08:43.178 "nvme": [ 00:08:43.178 { 00:08:43.178 "trid": { 00:08:43.178 "trtype": "TCP", 00:08:43.178 "adrfam": "IPv4", 00:08:43.178 "traddr": "10.0.0.2", 00:08:43.178 "trsvcid": "4420", 00:08:43.178 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:43.178 }, 00:08:43.178 "ctrlr_data": { 00:08:43.178 "cntlid": 1, 00:08:43.178 "vendor_id": "0x8086", 00:08:43.178 "model_number": "SPDK bdev Controller", 00:08:43.178 "serial_number": "SPDK0", 00:08:43.178 "firmware_revision": "24.09", 00:08:43.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:43.178 "oacs": { 00:08:43.178 "security": 0, 00:08:43.178 "format": 0, 00:08:43.178 "firmware": 0, 00:08:43.178 "ns_manage": 0 00:08:43.178 }, 00:08:43.178 "multi_ctrlr": true, 00:08:43.178 "ana_reporting": false 00:08:43.178 }, 00:08:43.178 "vs": { 00:08:43.178 "nvme_version": "1.3" 00:08:43.178 }, 00:08:43.178 "ns_data": { 00:08:43.178 "id": 1, 00:08:43.178 "can_share": true 00:08:43.178 } 00:08:43.178 } 00:08:43.178 ], 00:08:43.178 "mp_policy": "active_passive" 00:08:43.178 } 00:08:43.178 } 00:08:43.178 ] 00:08:43.178 01:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78226 00:08:43.178 01:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:43.178 01:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:43.436 Running I/O for 10 seconds... 
00:08:44.374 Latency(us) 00:08:44.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.374 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:44.374 =================================================================================================================== 00:08:44.374 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:44.374 00:08:45.310 01:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:45.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.310 Nvme0n1 : 2.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:45.310 =================================================================================================================== 00:08:45.310 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:45.310 00:08:45.569 true 00:08:45.569 01:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:45.569 01:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:45.828 01:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:45.828 01:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:45.828 01:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 78226 00:08:46.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.395 Nvme0n1 : 3.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:46.395 =================================================================================================================== 00:08:46.395 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:46.395 00:08:47.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.331 Nvme0n1 : 4.00 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:47.331 =================================================================================================================== 00:08:47.331 Total : 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:47.331 00:08:48.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.267 Nvme0n1 : 5.00 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:08:48.267 =================================================================================================================== 00:08:48.267 Total : 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:08:48.267 00:08:49.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.644 Nvme0n1 : 6.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:49.644 =================================================================================================================== 00:08:49.644 Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:49.644 00:08:50.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.211 Nvme0n1 : 7.00 6749.14 26.36 0.00 0.00 0.00 0.00 0.00 00:08:50.211 =================================================================================================================== 00:08:50.211 
Total : 6749.14 26.36 0.00 0.00 0.00 0.00 0.00 00:08:50.212 00:08:51.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.587 Nvme0n1 : 8.00 6746.88 26.35 0.00 0.00 0.00 0.00 0.00 00:08:51.587 =================================================================================================================== 00:08:51.587 Total : 6746.88 26.35 0.00 0.00 0.00 0.00 0.00 00:08:51.587 00:08:52.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.522 Nvme0n1 : 9.00 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:08:52.522 =================================================================================================================== 00:08:52.522 Total : 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:08:52.522 00:08:53.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.458 Nvme0n1 : 10.00 6593.20 25.75 0.00 0.00 0.00 0.00 0.00 00:08:53.458 =================================================================================================================== 00:08:53.458 Total : 6593.20 25.75 0.00 0.00 0.00 0.00 0.00 00:08:53.458 00:08:53.458 00:08:53.458 Latency(us) 00:08:53.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.458 Nvme0n1 : 10.01 6599.49 25.78 0.00 0.00 19390.39 11081.54 224013.96 00:08:53.458 =================================================================================================================== 00:08:53.458 Total : 6599.49 25.78 0.00 0.00 19390.39 11081.54 224013.96 00:08:53.458 0 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78221 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 78221 ']' 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 78221 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78221 00:08:53.458 killing process with pid 78221 00:08:53.458 Received shutdown signal, test time was about 10.000000 seconds 00:08:53.458 00:08:53.458 Latency(us) 00:08:53.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.458 =================================================================================================================== 00:08:53.458 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78221' 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 78221 00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 78221 
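The table above is the point of the test: bdevperf sustains roughly 6.6-6.9 K randwrite IOPS for the full 10 seconds while, early in the run, the lvstore is grown online on the already-resized AIO bdev and total_data_clusters moves from 49 to 99. The grow sequence, condensed from steps @36-@38 and @60-@62 of this run:

    truncate -s 400M "$AIO_FILE"                 # enlarge the backing file
    $rpc bdev_aio_rescan aio_bdev                # 51200 -> 102400 blocks
    $rpc bdev_lvol_grow_lvstore -u "$lvs"        # claim the new space
    clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))                         # doubled, minus metadata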
00:08:53.458 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.717 01:52:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 77875 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 77875 00:08:54.284 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 77875 Killed "${NVMF_APP[@]}" "$@" 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=78364 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 78364 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 78364 ']' 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:54.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.284 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.284 [2024-07-25 01:52:09.567080] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:54.284 [2024-07-25 01:52:09.567175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.542 [2024-07-25 01:52:09.691574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:54.542 [2024-07-25 01:52:09.710897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.542 [2024-07-25 01:52:09.744607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.542 [2024-07-25 01:52:09.744656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.542 [2024-07-25 01:52:09.744681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.542 [2024-07-25 01:52:09.744689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.542 [2024-07-25 01:52:09.744695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.542 [2024-07-25 01:52:09.744735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.542 [2024-07-25 01:52:09.773784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:54.542 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.542 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:54.542 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.542 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.542 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.800 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.800 01:52:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.800 [2024-07-25 01:52:10.050462] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:54.800 [2024-07-25 01:52:10.051032] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:54.800 [2024-07-25 01:52:10.051376] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:54.800 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:54.800 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 403370c7-0dbe-4d47-a2ea-d848fab99e7e 
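Here is where "dirty" earns its name: the previous target was killed with SIGKILL (pid 77875 above) while the lvstore was still open, so when the fresh target re-attaches the same file, the blobstore load path detects the unclean shutdown and replays its metadata (the "Performing recovery on blobstore" / "Recover: blob" notices). The calls that follow then wait for examine to finish and confirm the lvol survived. A sketch of that restart-and-recover sequence, using the pids, command line, and UUID from this run:

    kill -9 "$nvmfpid" && wait "$nvmfpid" || true    # no clean lvstore unload
    ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Re-creating the AIO bdev triggers bs_recover on load; the lvol
    # reappears once examine completes.
    $rpc bdev_aio_create "$AIO_FILE" aio_bdev 4096
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b 403370c7-0dbe-4d47-a2ea-d848fab99e7e -t 2000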
00:08:54.800 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=403370c7-0dbe-4d47-a2ea-d848fab99e7e 00:08:54.800 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.800 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:54.800 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.800 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.800 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:55.059 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 403370c7-0dbe-4d47-a2ea-d848fab99e7e -t 2000 00:08:55.318 [ 00:08:55.318 { 00:08:55.318 "name": "403370c7-0dbe-4d47-a2ea-d848fab99e7e", 00:08:55.318 "aliases": [ 00:08:55.318 "lvs/lvol" 00:08:55.318 ], 00:08:55.318 "product_name": "Logical Volume", 00:08:55.318 "block_size": 4096, 00:08:55.318 "num_blocks": 38912, 00:08:55.318 "uuid": "403370c7-0dbe-4d47-a2ea-d848fab99e7e", 00:08:55.318 "assigned_rate_limits": { 00:08:55.318 "rw_ios_per_sec": 0, 00:08:55.318 "rw_mbytes_per_sec": 0, 00:08:55.318 "r_mbytes_per_sec": 0, 00:08:55.318 "w_mbytes_per_sec": 0 00:08:55.318 }, 00:08:55.318 "claimed": false, 00:08:55.318 "zoned": false, 00:08:55.318 "supported_io_types": { 00:08:55.318 "read": true, 00:08:55.318 "write": true, 00:08:55.318 "unmap": true, 00:08:55.318 "flush": false, 00:08:55.318 "reset": true, 00:08:55.318 "nvme_admin": false, 00:08:55.318 "nvme_io": false, 00:08:55.318 "nvme_io_md": false, 00:08:55.318 "write_zeroes": true, 00:08:55.318 "zcopy": false, 00:08:55.318 "get_zone_info": false, 00:08:55.318 "zone_management": false, 00:08:55.318 "zone_append": false, 00:08:55.318 "compare": false, 00:08:55.318 "compare_and_write": false, 00:08:55.318 "abort": false, 00:08:55.318 "seek_hole": true, 00:08:55.318 "seek_data": true, 00:08:55.318 "copy": false, 00:08:55.318 "nvme_iov_md": false 00:08:55.318 }, 00:08:55.318 "driver_specific": { 00:08:55.318 "lvol": { 00:08:55.318 "lvol_store_uuid": "2283b536-f07c-475e-941a-bc2023b6eade", 00:08:55.318 "base_bdev": "aio_bdev", 00:08:55.318 "thin_provision": false, 00:08:55.318 "num_allocated_clusters": 38, 00:08:55.318 "snapshot": false, 00:08:55.318 "clone": false, 00:08:55.318 "esnap_clone": false 00:08:55.318 } 00:08:55.318 } 00:08:55.318 } 00:08:55.318 ] 00:08:55.318 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:55.318 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:55.318 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:55.577 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:55.577 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:55.577 01:52:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:55.836 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:55.836 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:56.094 [2024-07-25 01:52:11.248626] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:56.094 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:56.351 request: 00:08:56.351 { 00:08:56.351 "uuid": "2283b536-f07c-475e-941a-bc2023b6eade", 00:08:56.351 "method": "bdev_lvol_get_lvstores", 00:08:56.351 "req_id": 1 00:08:56.351 } 00:08:56.351 Got JSON-RPC error response 00:08:56.351 response: 00:08:56.351 { 00:08:56.351 "code": -19, 00:08:56.351 "message": "No such device" 00:08:56.351 } 00:08:56.351 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:56.351 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:56.351 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:56.351 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:56.351 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.610 aio_bdev 00:08:56.610 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 403370c7-0dbe-4d47-a2ea-d848fab99e7e 00:08:56.610 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=403370c7-0dbe-4d47-a2ea-d848fab99e7e 00:08:56.610 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.610 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:56.610 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.610 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.610 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.868 01:52:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 403370c7-0dbe-4d47-a2ea-d848fab99e7e -t 2000 00:08:56.868 [ 00:08:56.868 { 00:08:56.868 "name": "403370c7-0dbe-4d47-a2ea-d848fab99e7e", 00:08:56.868 "aliases": [ 00:08:56.868 "lvs/lvol" 00:08:56.868 ], 00:08:56.868 "product_name": "Logical Volume", 00:08:56.868 "block_size": 4096, 00:08:56.868 "num_blocks": 38912, 00:08:56.868 "uuid": "403370c7-0dbe-4d47-a2ea-d848fab99e7e", 00:08:56.868 "assigned_rate_limits": { 00:08:56.868 "rw_ios_per_sec": 0, 00:08:56.868 "rw_mbytes_per_sec": 0, 00:08:56.868 "r_mbytes_per_sec": 0, 00:08:56.868 "w_mbytes_per_sec": 0 00:08:56.868 }, 00:08:56.868 "claimed": false, 00:08:56.868 "zoned": false, 00:08:56.868 "supported_io_types": { 00:08:56.868 "read": true, 00:08:56.868 "write": true, 00:08:56.868 "unmap": true, 00:08:56.868 "flush": false, 00:08:56.868 "reset": true, 00:08:56.868 "nvme_admin": false, 00:08:56.868 "nvme_io": false, 00:08:56.868 "nvme_io_md": false, 00:08:56.868 "write_zeroes": true, 00:08:56.868 "zcopy": false, 00:08:56.868 "get_zone_info": false, 00:08:56.868 "zone_management": false, 00:08:56.868 "zone_append": false, 00:08:56.868 "compare": false, 00:08:56.868 "compare_and_write": false, 00:08:56.868 "abort": false, 00:08:56.868 "seek_hole": true, 00:08:56.868 "seek_data": true, 00:08:56.868 "copy": false, 00:08:56.868 "nvme_iov_md": false 00:08:56.868 }, 00:08:56.868 "driver_specific": { 00:08:56.868 "lvol": { 00:08:56.868 "lvol_store_uuid": "2283b536-f07c-475e-941a-bc2023b6eade", 00:08:56.868 "base_bdev": "aio_bdev", 00:08:56.868 "thin_provision": false, 00:08:56.868 "num_allocated_clusters": 38, 00:08:56.868 "snapshot": false, 00:08:56.868 "clone": false, 00:08:56.868 "esnap_clone": false 00:08:56.868 } 00:08:56.868 } 00:08:56.868 } 00:08:56.868 ] 00:08:56.868 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:56.868 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
2283b536-f07c-475e-941a-bc2023b6eade 00:08:56.868 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:57.126 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:57.126 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:57.126 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:57.384 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:57.384 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 403370c7-0dbe-4d47-a2ea-d848fab99e7e 00:08:57.643 01:52:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2283b536-f07c-475e-941a-bc2023b6eade 00:08:57.901 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.159 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:58.417 ************************************ 00:08:58.417 END TEST lvs_grow_dirty 00:08:58.417 ************************************ 00:08:58.417 00:08:58.417 real 0m18.430s 00:08:58.417 user 0m38.460s 00:08:58.417 sys 0m9.058s 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:58.417 nvmf_trace.0 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.417 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:58.676 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.676 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:58.676 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.676 01:52:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.676 rmmod nvme_tcp 00:08:58.676 rmmod nvme_fabrics 00:08:58.934 rmmod nvme_keyring 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 78364 ']' 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 78364 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 78364 ']' 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 78364 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78364 00:08:58.934 killing process with pid 78364 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78364' 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 78364 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 78364 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:58.934 00:08:58.934 real 0m38.288s 00:08:58.934 user 1m0.180s 00:08:58.934 sys 
0m12.148s 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.934 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.934 ************************************ 00:08:58.934 END TEST nvmf_lvs_grow 00:08:58.934 ************************************ 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.194 ************************************ 00:08:59.194 START TEST nvmf_bdev_io_wait 00:08:59.194 ************************************ 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:59.194 * Looking for test storage... 00:08:59.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.194 
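nvmf/common.sh seeds each run with fixed ports and a freshly generated host identity, so initiators never collide across runs; the NQN/UUID pair above came from nvme gen-hostnqn. One way to express that derivation (the parameter-expansion split is an illustrative assumption, not necessarily how common.sh does it):

    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare <uuid> after the last colon
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")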
01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.194 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:59.195 Cannot find device "nvmf_tgt_br" 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:59.195 Cannot find device "nvmf_tgt_br2" 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:59.195 Cannot find device "nvmf_tgt_br" 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:59.195 Cannot find device "nvmf_tgt_br2" 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:59.195 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:59.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:59.454 01:52:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:59.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:08:59.454 00:08:59.454 --- 10.0.0.2 ping statistics --- 00:08:59.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.454 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:59.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:59.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:59.454 00:08:59.454 --- 10.0.0.3 ping statistics --- 00:08:59.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.454 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:59.454 00:08:59.454 --- 10.0.0.1 ping statistics --- 00:08:59.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.454 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:59.454 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=78664 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 78664 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 78664 ']' 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
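The trace above brings up the veth/bridge topology (10.0.0.1 on the host-side nvmf_init_if, 10.0.0.2 and 10.0.0.3 on interfaces moved into the nvmf_tgt_ns_spdk namespace, all bridged over nvmf_br) and then launches nvmf_tgt inside that namespace with --wait-for-rpc, blocking until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, where the polling loop and the 100-retry count are illustrative assumptions rather than the harness's exact waitforlisten implementation:

#!/usr/bin/env bash
# Sketch: start the target inside the test namespace, then poll its RPC socket.
NS=nvmf_tgt_ns_spdk
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds once the app is listening on /var/tmp/spdk.sock
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done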
00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.455 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.713 [2024-07-25 01:52:14.769227] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:59.713 [2024-07-25 01:52:14.769310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.713 [2024-07-25 01:52:14.894897] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:59.713 [2024-07-25 01:52:14.913373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.713 [2024-07-25 01:52:14.955110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.713 [2024-07-25 01:52:14.955419] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.713 [2024-07-25 01:52:14.955653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.713 [2024-07-25 01:52:14.955807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.713 [2024-07-25 01:52:14.955895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.713 [2024-07-25 01:52:14.956048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.713 [2024-07-25 01:52:14.956123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.713 [2024-07-25 01:52:14.956585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.713 [2024-07-25 01:52:14.956626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.713 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.713 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:59.713 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.713 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.713 01:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.973 01:52:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.973 [2024-07-25 01:52:15.087018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.973 [2024-07-25 01:52:15.098167] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.973 Malloc0 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.973 [2024-07-25 01:52:15.168630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=78692 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 
0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:59.973 { 00:08:59.973 "params": { 00:08:59.973 "name": "Nvme$subsystem", 00:08:59.973 "trtype": "$TEST_TRANSPORT", 00:08:59.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.973 "adrfam": "ipv4", 00:08:59.973 "trsvcid": "$NVMF_PORT", 00:08:59.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.973 "hdgst": ${hdgst:-false}, 00:08:59.973 "ddgst": ${ddgst:-false} 00:08:59.973 }, 00:08:59.973 "method": "bdev_nvme_attach_controller" 00:08:59.973 } 00:08:59.973 EOF 00:08:59.973 )") 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=78694 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:59.973 { 00:08:59.973 "params": { 00:08:59.973 "name": "Nvme$subsystem", 00:08:59.973 "trtype": "$TEST_TRANSPORT", 00:08:59.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.973 "adrfam": "ipv4", 00:08:59.973 "trsvcid": "$NVMF_PORT", 00:08:59.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.973 "hdgst": ${hdgst:-false}, 00:08:59.973 "ddgst": ${ddgst:-false} 00:08:59.973 }, 00:08:59.973 "method": "bdev_nvme_attach_controller" 00:08:59.973 } 00:08:59.973 EOF 00:08:59.973 )") 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=78697 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:59.973 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:59.974 
01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:59.974 { 00:08:59.974 "params": { 00:08:59.974 "name": "Nvme$subsystem", 00:08:59.974 "trtype": "$TEST_TRANSPORT", 00:08:59.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.974 "adrfam": "ipv4", 00:08:59.974 "trsvcid": "$NVMF_PORT", 00:08:59.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.974 "hdgst": ${hdgst:-false}, 00:08:59.974 "ddgst": ${ddgst:-false} 00:08:59.974 }, 00:08:59.974 "method": "bdev_nvme_attach_controller" 00:08:59.974 } 00:08:59.974 EOF 00:08:59.974 )") 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=78701 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:59.974 { 00:08:59.974 "params": { 00:08:59.974 "name": "Nvme$subsystem", 00:08:59.974 "trtype": "$TEST_TRANSPORT", 00:08:59.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.974 "adrfam": "ipv4", 00:08:59.974 "trsvcid": "$NVMF_PORT", 00:08:59.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.974 "hdgst": ${hdgst:-false}, 00:08:59.974 "ddgst": ${ddgst:-false} 00:08:59.974 }, 00:08:59.974 "method": "bdev_nvme_attach_controller" 00:08:59.974 } 00:08:59.974 EOF 00:08:59.974 )") 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
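Each bdevperf instance reads its controller definition from /dev/fd/63, a process substitution fed by gen_nvmf_target_json: the heredoc template traced above is expanded per subsystem and the fragments are joined with jq before being printed (the filled-in result appears just below). A condensed sketch of that generator, simplified to the single-controller case used here; the jq assembly of multiple subsystems is elided:

gen_nvmf_target_json() {
    local subsystem=${1:-1}
    # Same template as the heredoc in the trace; digests default to off.
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}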
00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:59.974 "params": { 00:08:59.974 "name": "Nvme1", 00:08:59.974 "trtype": "tcp", 00:08:59.974 "traddr": "10.0.0.2", 00:08:59.974 "adrfam": "ipv4", 00:08:59.974 "trsvcid": "4420", 00:08:59.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:59.974 "hdgst": false, 00:08:59.974 "ddgst": false 00:08:59.974 }, 00:08:59.974 "method": "bdev_nvme_attach_controller" 00:08:59.974 }' 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:59.974 "params": { 00:08:59.974 "name": "Nvme1", 00:08:59.974 "trtype": "tcp", 00:08:59.974 "traddr": "10.0.0.2", 00:08:59.974 "adrfam": "ipv4", 00:08:59.974 "trsvcid": "4420", 00:08:59.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:59.974 "hdgst": false, 00:08:59.974 "ddgst": false 00:08:59.974 }, 00:08:59.974 "method": "bdev_nvme_attach_controller" 00:08:59.974 }' 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:59.974 "params": { 00:08:59.974 "name": "Nvme1", 00:08:59.974 "trtype": "tcp", 00:08:59.974 "traddr": "10.0.0.2", 00:08:59.974 "adrfam": "ipv4", 00:08:59.974 "trsvcid": "4420", 00:08:59.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:59.974 "hdgst": false, 00:08:59.974 "ddgst": false 00:08:59.974 }, 00:08:59.974 "method": "bdev_nvme_attach_controller" 00:08:59.974 }' 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:59.974 "params": { 00:08:59.974 "name": "Nvme1", 00:08:59.974 "trtype": "tcp", 00:08:59.974 "traddr": "10.0.0.2", 00:08:59.974 "adrfam": "ipv4", 00:08:59.974 "trsvcid": "4420", 00:08:59.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:59.974 "hdgst": false, 00:08:59.974 "ddgst": false 00:08:59.974 }, 00:08:59.974 "method": "bdev_nvme_attach_controller" 00:08:59.974 }' 00:08:59.974 [2024-07-25 01:52:15.227866] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:59.974 [2024-07-25 01:52:15.227932] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:59.974 [2024-07-25 01:52:15.229131] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:08:59.974 [2024-07-25 01:52:15.229203] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:59.974 [2024-07-25 01:52:15.233812] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:08:59.974 [2024-07-25 01:52:15.234037] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:59.974 01:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 78692 00:09:00.233 [2024-07-25 01:52:15.274685] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:09:00.233 [2024-07-25 01:52:15.274807] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:00.233 [2024-07-25 01:52:15.384473] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:00.233 [2024-07-25 01:52:15.425133] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:00.233 [2024-07-25 01:52:15.426613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.233 [2024-07-25 01:52:15.445945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.233 [2024-07-25 01:52:15.461417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:00.233 [2024-07-25 01:52:15.468776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:00.233 [2024-07-25 01:52:15.473337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:00.233 [2024-07-25 01:52:15.488042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.233 [2024-07-25 01:52:15.498626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.233 [2024-07-25 01:52:15.510183] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:00.233 [2024-07-25 01:52:15.510658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:00.233 [2024-07-25 01:52:15.513178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.233 [2024-07-25 01:52:15.528941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.517 [2024-07-25 01:52:15.540131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.517 [2024-07-25 01:52:15.556256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:00.517 [2024-07-25 01:52:15.587591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.517 Running I/O for 1 seconds... 00:09:00.517 Running I/O for 1 seconds... 00:09:00.517 Running I/O for 1 seconds... 00:09:00.517 Running I/O for 1 seconds... 
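The four one-second jobs run concurrently, one bdevperf process per workload, each pinned to its own core (0x10 write, 0x20 read, 0x40 flush, 0x80 unmap) and given a distinct instance id via -i, which is why the EAL lines above show separate spdk1..spdk4 file prefixes. A sketch of the launch pattern, rewritten as a loop for brevity (the script itself records WRITE_PID, READ_PID, FLUSH_PID and UNMAP_PID individually):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
declare -A mask=( [write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80 )
i=1
pids=()
for w in write read flush unmap; do
    # -s 256 caps each instance at 256 MiB of hugepage memory
    "$BDEVPERF" -m "${mask[$w]}" -i $((i++)) --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$w" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"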
00:09:01.457
00:09:01.457 Latency(us)
00:09:01.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.457 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:01.457 Nvme1n1 : 1.01 9343.17 36.50 0.00 0.00 13635.44 8400.52 24903.68
00:09:01.458 ===================================================================================================================
00:09:01.458 Total : 9343.17 36.50 0.00 0.00 13635.44 8400.52 24903.68
00:09:01.458
00:09:01.458 Latency(us)
00:09:01.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.458 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:01.458 Nvme1n1 : 1.02 5404.38 21.11 0.00 0.00 23334.02 10366.60 36461.85
00:09:01.458 ===================================================================================================================
00:09:01.458 Total : 5404.38 21.11 0.00 0.00 23334.02 10366.60 36461.85
00:09:01.458
00:09:01.458 Latency(us)
00:09:01.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.458 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:01.458 Nvme1n1 : 1.01 5413.82 21.15 0.00 0.00 23565.49 5928.03 49807.36
00:09:01.458 ===================================================================================================================
00:09:01.458 Total : 5413.82 21.15 0.00 0.00 23565.49 5928.03 49807.36
00:09:01.458
00:09:01.458 Latency(us)
00:09:01.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.458 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:01.458 Nvme1n1 : 1.00 166532.39 650.52 0.00 0.00 765.79 350.02 1266.04
00:09:01.458 ===================================================================================================================
00:09:01.458 Total : 166532.39 650.52 0.00 0.00 765.79 350.02 1266.04
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 78694
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 78697
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 78701
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
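At the 4096-byte IO size used here, MiB/s is simply IOPS/256 (4096 / 2^20 = 1/256), so the throughput columns in the tables above are easy to sanity-check; the flush row is the outlier because flushing a RAM-backed malloc bdev completes almost immediately. For example:

$ awk 'BEGIN { printf "%.2f %.2f %.2f %.2f\n",
               9343.17/256, 5404.38/256, 5413.82/256, 166532.39/256 }'
36.50 21.11 21.15 650.52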
00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.716 rmmod nvme_tcp 00:09:01.716 rmmod nvme_fabrics 00:09:01.716 rmmod nvme_keyring 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 78664 ']' 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 78664 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 78664 ']' 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 78664 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78664 00:09:01.716 killing process with pid 78664 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78664' 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 78664 00:09:01.716 01:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 78664 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:01.975 00:09:01.975 real 0m2.875s 00:09:01.975 user 0m12.674s 00:09:01.975 sys 0m1.863s 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.975 ************************************ 00:09:01.975 END TEST nvmf_bdev_io_wait 00:09:01.975 ************************************ 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.975 ************************************ 00:09:01.975 START TEST nvmf_queue_depth 00:09:01.975 ************************************ 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.975 * Looking for test storage... 00:09:01.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.975 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:02.234 Cannot find device "nvmf_tgt_br" 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:02.234 Cannot find device "nvmf_tgt_br2" 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:02.234 Cannot find device "nvmf_tgt_br" 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:02.234 Cannot find device "nvmf_tgt_br2" 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:02.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:02.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.234 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:02.235 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:02.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:09:02.494 00:09:02.494 --- 10.0.0.2 ping statistics --- 00:09:02.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.494 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:02.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:02.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:09:02.494 00:09:02.494 --- 10.0.0.3 ping statistics --- 00:09:02.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.494 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:02.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:02.494 00:09:02.494 --- 10.0.0.1 ping statistics --- 00:09:02.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.494 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=78902 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 78902 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 78902 ']' 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.494 01:52:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.494 [2024-07-25 01:52:17.684980] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:09:02.494 [2024-07-25 01:52:17.685523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.753 [2024-07-25 01:52:17.804682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:02.753 [2024-07-25 01:52:17.828057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.753 [2024-07-25 01:52:17.869077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.753 [2024-07-25 01:52:17.869137] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.753 [2024-07-25 01:52:17.869151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.753 [2024-07-25 01:52:17.869161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.753 [2024-07-25 01:52:17.869170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.753 [2024-07-25 01:52:17.869203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.753 [2024-07-25 01:52:17.902295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.319 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.319 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:03.319 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.319 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.319 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.577 [2024-07-25 01:52:18.643809] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.577 Malloc0 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.577 01:52:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.577 [2024-07-25 01:52:18.707470] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=78934 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 78934 /var/tmp/bdevperf.sock 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 78934 ']' 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:03.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.577 01:52:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.577 [2024-07-25 01:52:18.766334] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
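The bdevperf app starting here talks to a target that queue_depth.sh has just provisioned entirely over JSON-RPC; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py against the target's /var/tmp/spdk.sock. Condensed, the sequence logged above is roughly ($rpc is shorthand for the logged rpc.py path; flags as captured):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8192-byte IO unit
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Backing the namespace with a malloc bdev keeps the queue-depth measurement in memory, so no physical disk variability leaks into the numbers below.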
00:09:03.577 [2024-07-25 01:52:18.766438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78934 ] 00:09:03.834 [2024-07-25 01:52:18.888844] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:03.834 [2024-07-25 01:52:18.907888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.834 [2024-07-25 01:52:18.940489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.834 [2024-07-25 01:52:18.969788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.834 01:52:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.834 01:52:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:03.834 01:52:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.834 01:52:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.834 01:52:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.834 NVMe0n1 00:09:03.834 01:52:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.834 01:52:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.092 Running I/O for 10 seconds... 
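The -z flag puts bdevperf into RPC-server mode: it idles on /var/tmp/bdevperf.sock, has a remote NVMe-oF controller attached over that socket, and only starts I/O once perform_tests is issued, which is what kicked off the ten-second run whose results follow. In outline, with commands as logged (-q 1024 is the queue depth under test):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # (the harness waitforlistens on bdevperf.sock here before issuing RPCs)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # yields bdev NVMe0n1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests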
00:09:14.059 00:09:14.059 Latency(us) 00:09:14.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.059 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:14.059 Verification LBA range: start 0x0 length 0x4000 00:09:14.059 NVMe0n1 : 10.09 9234.75 36.07 0.00 0.00 110398.72 25976.09 86745.83 00:09:14.059 =================================================================================================================== 00:09:14.059 Total : 9234.75 36.07 0.00 0.00 110398.72 25976.09 86745.83 00:09:14.059 0 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 78934 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 78934 ']' 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 78934 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78934 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.059 killing process with pid 78934 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78934' 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 78934 00:09:14.059 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.059 00:09:14.059 Latency(us) 00:09:14.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.059 =================================================================================================================== 00:09:14.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.059 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 78934 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.319 rmmod nvme_tcp 00:09:14.319 rmmod nvme_fabrics 00:09:14.319 rmmod nvme_keyring 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:14.319 01:52:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 78902 ']' 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 78902 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 78902 ']' 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 78902 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78902 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:14.319 killing process with pid 78902 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78902' 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 78902 00:09:14.319 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 78902 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:14.578 ************************************ 00:09:14.578 END TEST nvmf_queue_depth 00:09:14.578 ************************************ 00:09:14.578 00:09:14.578 real 0m12.545s 00:09:14.578 user 0m21.559s 00:09:14.578 sys 0m1.936s 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.578 01:52:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:14.579 01:52:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.579 01:52:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.579 01:52:29 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.579 ************************************ 00:09:14.579 START TEST nvmf_target_multipath 00:09:14.579 ************************************ 00:09:14.579 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:14.579 * Looking for test storage... 00:09:14.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.579 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.838 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.839 01:52:29 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:14.839 Cannot find device "nvmf_tgt_br" 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.839 Cannot find device "nvmf_tgt_br2" 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:14.839 Cannot find device "nvmf_tgt_br" 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:14.839 Cannot find device "nvmf_tgt_br2" 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:14.839 01:52:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:14.839 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:15.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:15.100 00:09:15.100 --- 10.0.0.2 ping statistics --- 00:09:15.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.100 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:15.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:15.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:15.100 00:09:15.100 --- 10.0.0.3 ping statistics --- 00:09:15.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.100 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:15.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:15.100 00:09:15.100 --- 10.0.0.1 ping statistics --- 00:09:15.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.100 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=79243 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 79243 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 79243 ']' 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
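The three pings above are the tail end of nvmf_veth_init: after tearing down any stale devices (the "Cannot find device" and "Cannot open network namespace" messages), it rebuilds the virtual topology the multipath test rides on, namely two veth pairs into the nvmf_tgt_ns_spdk namespace (one per target address) bridged together with the initiator-side interface. Reassembled from the trace above, modulo ordering and the elided checks, the construction is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target path A (10.0.0.2)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path B (10.0.0.3)
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge all three legs together
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The bridge is what lets the single initiator-side interface reach both target addresses, giving the test two distinct TCP paths to the same subsystem, hence the two controllers (nvme0c0n1, nvme0c1n1) that show up once the initiator connects.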
00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.100 01:52:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.100 [2024-07-25 01:52:30.311713] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:09:15.100 [2024-07-25 01:52:30.311794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.360 [2024-07-25 01:52:30.438015] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:15.360 [2024-07-25 01:52:30.452278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.360 [2024-07-25 01:52:30.486781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.360 [2024-07-25 01:52:30.486825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.360 [2024-07-25 01:52:30.486834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.360 [2024-07-25 01:52:30.486870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.360 [2024-07-25 01:52:30.486877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
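With -m 0xF the app reports four cores available, and the tracepoint notices repeat the debugging recipe SPDK prints on every start: trace group mask 0xFFFF is enabled, so runtime events can be snapshotted from the live app or recovered from shared memory after the fact. Following the app's own hints verbatim:

  # Attach to the running app (trace group 'nvmf', instance id 0), per the notice above:
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis once the run is over:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0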
00:09:15.360 [2024-07-25 01:52:30.487022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.360 [2024-07-25 01:52:30.487459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.360 [2024-07-25 01:52:30.488144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.360 [2024-07-25 01:52:30.488159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.360 [2024-07-25 01:52:30.516134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.296 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.296 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:16.296 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.296 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.296 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:16.296 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.296 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.553 [2024-07-25 01:52:31.623660] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.553 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:16.811 Malloc0 00:09:16.812 01:52:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:17.070 01:52:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.328 01:52:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.328 [2024-07-25 01:52:32.614327] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.586 01:52:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:17.586 [2024-07-25 01:52:32.826473] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:17.586 01:52:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:17.844 01:52:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.3 -s 4420 -g -G 00:09:17.844 01:52:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.844 01:52:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:17.844 01:52:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.844 01:52:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:17.844 01:52:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=79332 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:20.370 01:52:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:20.370 [global] 00:09:20.370 thread=1 00:09:20.370 invalidate=1 00:09:20.370 rw=randrw 00:09:20.370 time_based=1 00:09:20.370 runtime=6 00:09:20.370 ioengine=libaio 00:09:20.370 direct=1 00:09:20.370 bs=4096 00:09:20.371 iodepth=128 00:09:20.371 norandommap=0 00:09:20.371 numjobs=1 00:09:20.371 00:09:20.371 verify_dump=1 00:09:20.371 verify_backlog=512 00:09:20.371 verify_state_save=0 00:09:20.371 do_verify=1 00:09:20.371 verify=crc32c-intel 00:09:20.371 [job0] 00:09:20.371 filename=/dev/nvme0n1 00:09:20.371 Could not set queue depth (nvme0n1) 00:09:20.371 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.371 fio-3.35 00:09:20.371 Starting 1 thread 00:09:20.943 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:21.209 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:21.467 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:21.467 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:21.467 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:09:21.467 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.468 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:21.726 01:52:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.985 01:52:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 79332 00:09:26.173 00:09:26.173 job0: (groupid=0, jobs=1): err= 0: pid=79359: Thu Jul 25 01:52:41 2024 00:09:26.173 read: IOPS=10.8k, BW=42.3MiB/s (44.4MB/s)(254MiB/6006msec) 00:09:26.173 slat (usec): min=6, max=8157, avg=54.09, stdev=209.98 00:09:26.173 clat (usec): min=1754, max=16538, avg=7986.94, stdev=1405.92 00:09:26.173 lat (usec): min=1765, max=16572, avg=8041.03, stdev=1409.90 00:09:26.173 clat percentiles (usec): 00:09:26.173 | 1.00th=[ 4178], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 7242], 00:09:26.173 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8029], 00:09:26.173 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[11207], 00:09:26.173 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13960], 99.95th=[14091], 00:09:26.173 | 99.99th=[14746] 00:09:26.173 bw ( KiB/s): min=11848, max=28248, per=51.95%, avg=22522.18, stdev=5615.09, samples=11 00:09:26.173 iops : min= 2962, max= 7062, avg=5630.55, stdev=1403.77, samples=11 00:09:26.173 write: IOPS=6580, BW=25.7MiB/s (27.0MB/s)(135MiB/5260msec); 0 zone resets 00:09:26.173 slat (usec): min=15, max=4621, avg=61.77, stdev=151.92 00:09:26.173 clat (usec): min=1815, max=14247, avg=6991.52, stdev=1254.25 00:09:26.173 lat (usec): min=1839, max=14279, avg=7053.28, stdev=1259.20 00:09:26.173 clat percentiles (usec): 00:09:26.173 | 1.00th=[ 3163], 5.00th=[ 4178], 10.00th=[ 5735], 20.00th=[ 6456], 00:09:26.173 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7308], 00:09:26.173 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 8094], 95.00th=[ 8455], 00:09:26.173 | 99.00th=[10814], 99.50th=[11338], 99.90th=[12649], 99.95th=[13435], 00:09:26.173 | 99.99th=[14091] 00:09:26.173 bw ( KiB/s): min=12000, max=27792, per=85.63%, avg=22539.91, stdev=5286.00, samples=11 00:09:26.173 iops : min= 3000, max= 6948, avg=5634.91, stdev=1321.47, samples=11 00:09:26.173 lat (msec) : 2=0.04%, 4=1.92%, 10=92.45%, 20=5.59% 00:09:26.173 cpu : usr=5.30%, sys=22.35%, ctx=5844, majf=0, minf=151 00:09:26.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:26.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.173 issued rwts: total=65100,34612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.173 00:09:26.173 Run status group 0 (all jobs): 00:09:26.173 READ: bw=42.3MiB/s (44.4MB/s), 42.3MiB/s-42.3MiB/s (44.4MB/s-44.4MB/s), io=254MiB (267MB), run=6006-6006msec 00:09:26.173 WRITE: bw=25.7MiB/s (27.0MB/s), 25.7MiB/s-25.7MiB/s (27.0MB/s-27.0MB/s), io=135MiB (142MB), run=5260-5260msec 00:09:26.173 00:09:26.173 Disk stats (read/write): 00:09:26.173 nvme0n1: ios=64432/33754, merge=0/0, ticks=493204/220909, in_queue=714113, util=98.65% 00:09:26.173 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:26.741 01:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=79433 00:09:26.741 01:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:26.741 01:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:26.741 [global] 00:09:26.741 thread=1 00:09:26.741 invalidate=1 00:09:26.741 rw=randrw 00:09:26.741 time_based=1 00:09:26.741 runtime=6 00:09:26.741 ioengine=libaio 00:09:26.741 direct=1 00:09:26.741 bs=4096 00:09:26.741 iodepth=128 00:09:26.741 norandommap=0 00:09:26.741 numjobs=1 00:09:26.741 00:09:26.741 verify_dump=1 00:09:26.741 verify_backlog=512 00:09:26.741 verify_state_save=0 00:09:26.741 do_verify=1 00:09:26.741 verify=crc32c-intel 00:09:26.741 [job0] 00:09:26.741 filename=/dev/nvme0n1 00:09:26.999 Could not set queue depth (nvme0n1) 00:09:26.999 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.999 fio-3.35 00:09:26.999 Starting 1 thread 00:09:27.935 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:28.202 
01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:28.202 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:28.461 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:28.719 01:52:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 79433 00:09:33.987 00:09:33.987 job0: (groupid=0, jobs=1): err= 0: pid=79458: Thu Jul 25 01:52:48 2024 00:09:33.987 read: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(268MiB/6007msec) 00:09:33.987 slat (usec): min=6, max=7575, avg=43.17, stdev=186.57 00:09:33.987 clat (usec): min=313, max=15630, avg=7590.48, stdev=1960.55 00:09:33.987 lat (usec): min=347, max=15664, avg=7633.65, stdev=1975.76 00:09:33.987 clat percentiles (usec): 00:09:33.987 | 1.00th=[ 2933], 5.00th=[ 4146], 10.00th=[ 4752], 20.00th=[ 5997], 00:09:33.987 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8160], 00:09:33.987 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[11076], 00:09:33.987 | 99.00th=[12780], 99.50th=[13173], 99.90th=[13829], 99.95th=[14091], 00:09:33.987 | 99.99th=[14484] 00:09:33.987 bw ( KiB/s): min=10980, max=39080, per=54.20%, avg=24758.18, stdev=8635.26, samples=11 00:09:33.987 iops : min= 2745, max= 9770, avg=6189.55, stdev=2158.81, samples=11 00:09:33.987 write: IOPS=6849, BW=26.8MiB/s (28.1MB/s)(145MiB/5415msec); 0 zone resets 00:09:33.987 slat (usec): min=15, max=2849, avg=55.38, stdev=133.55 00:09:33.987 clat (usec): min=884, max=14608, avg=6522.91, stdev=1807.88 00:09:33.987 lat (usec): min=939, max=14634, avg=6578.29, stdev=1821.74 00:09:33.987 clat percentiles (usec): 00:09:33.987 | 1.00th=[ 2573], 5.00th=[ 3359], 10.00th=[ 3785], 20.00th=[ 4490], 00:09:33.987 | 30.00th=[ 5604], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7373], 00:09:33.987 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8291], 95.00th=[ 8586], 00:09:33.987 | 99.00th=[10814], 99.50th=[11469], 99.90th=[12518], 99.95th=[13698], 00:09:33.987 | 99.99th=[14484] 00:09:33.987 bw ( KiB/s): min=11577, max=39608, per=90.33%, avg=24747.73, stdev=8437.03, samples=11 00:09:33.987 iops : min= 2894, max= 9902, avg=6186.91, stdev=2109.30, samples=11 00:09:33.987 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.04% 00:09:33.987 lat (msec) : 2=0.25%, 4=7.06%, 10=87.78%, 20=4.82% 00:09:33.987 cpu : usr=5.99%, sys=24.06%, ctx=6030, majf=0, minf=108 00:09:33.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:33.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.987 issued rwts: total=68601,37088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.987 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:33.987 00:09:33.987 Run status group 0 (all jobs): 00:09:33.987 READ: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=268MiB (281MB), run=6007-6007msec 00:09:33.987 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=145MiB (152MB), run=5415-5415msec 00:09:33.987 00:09:33.987 Disk stats (read/write): 00:09:33.987 nvme0n1: ios=67998/36225, merge=0/0, ticks=492443/218790, in_queue=711233, util=98.60% 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.987 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.988 rmmod nvme_tcp 00:09:33.988 rmmod nvme_fabrics 00:09:33.988 rmmod nvme_keyring 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 
79243 ']' 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 79243 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 79243 ']' 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 79243 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79243 00:09:33.988 killing process with pid 79243 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79243' 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 79243 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 79243 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:33.988 00:09:33.988 real 0m19.186s 00:09:33.988 user 1m11.640s 00:09:33.988 sys 0m10.259s 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.988 01:52:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:33.988 ************************************ 00:09:33.988 END TEST nvmf_target_multipath 00:09:33.988 ************************************ 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.988 
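The multipath test that ends above exercises kernel-native NVMe multipath against two SPDK listeners (10.0.0.2 and 10.0.0.3): each nvmf_subsystem_listener_set_ana_state RPC flips one path's ANA state, and check_ana_state then polls the initiator's sysfs view until the change lands, all while fio drives the randrw job printed earlier through /dev/nvme0n1. A minimal sketch of that polling pattern, reconstructed from the xtrace (the 20-iteration bound mirrors "local timeout=20" in multipath.sh; the timeout message wording is illustrative, not from the script):

#!/usr/bin/env bash
# Poll /sys/block/<path>/ana_state until it reports the expected ANA state,
# giving the kernel initiator time to process the ANA change notification.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1
        if ((--timeout == 0)); then
            echo "$path never reached $ana_state" >&2
            return 1
        fi
    done
}

# Flip one listener and wait for the host to observe it, as in the trace:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
check_ana_state nvme0c0n1 inaccessible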
************************************ 00:09:33.988 START TEST nvmf_zcopy 00:09:33.988 ************************************ 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:33.988 * Looking for test storage... 00:09:33.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:33.988 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:33.989 Cannot find device "nvmf_tgt_br" 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.989 Cannot find device "nvmf_tgt_br2" 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:33.989 Cannot find device "nvmf_tgt_br" 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:33.989 Cannot find device "nvmf_tgt_br2" 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:33.989 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:34.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:09:34.248 00:09:34.248 --- 10.0.0.2 ping statistics --- 00:09:34.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.248 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:34.248 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:34.248 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.248 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:09:34.248 00:09:34.248 --- 10.0.0.3 ping statistics --- 00:09:34.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.248 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:34.249 00:09:34.249 --- 10.0.0.1 ping statistics --- 00:09:34.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.249 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=79700 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 79700 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 79700 ']' 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.249 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.508 [2024-07-25 01:52:49.558771] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:09:34.508 [2024-07-25 01:52:49.558893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.508 [2024-07-25 01:52:49.682184] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:34.508 [2024-07-25 01:52:49.697983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.508 [2024-07-25 01:52:49.731339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.508 [2024-07-25 01:52:49.731400] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.508 [2024-07-25 01:52:49.731410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.508 [2024-07-25 01:52:49.731417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.508 [2024-07-25 01:52:49.731423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.508 [2024-07-25 01:52:49.731447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.508 [2024-07-25 01:52:49.760193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.767 [2024-07-25 01:52:49.859099] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.767 [2024-07-25 01:52:49.875148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.767 malloc0 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:34.767 { 00:09:34.767 "params": { 00:09:34.767 "name": "Nvme$subsystem", 00:09:34.767 "trtype": "$TEST_TRANSPORT", 00:09:34.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.767 "adrfam": "ipv4", 00:09:34.767 "trsvcid": "$NVMF_PORT", 00:09:34.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.767 "hdgst": ${hdgst:-false}, 00:09:34.767 "ddgst": ${ddgst:-false} 00:09:34.767 }, 00:09:34.767 "method": "bdev_nvme_attach_controller" 00:09:34.767 } 00:09:34.767 EOF 00:09:34.767 )") 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
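At this point the target side is fully prepared by the rpc_cmd calls traced above: the TCP transport is created with zero-copy enabled (nvmf_create_transport -t tcp -o -c 0 --zcopy), cnode1 gets a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev (4096-byte blocks) is attached as namespace 1. The gen_nvmf_target_json helper then builds a matching bdevperf configuration on the fly; the --json /dev/fd/62 argument in the bdevperf command line is simply bash process substitution, not a file on disk. A condensed sketch of the helper, reconstructed from the cat/jq/IFS/printf records above and below (the outer "subsystems"/"bdev" wrapper is how the stock helper in nvmf/common.sh shapes the document and is not itself spelled out in this trace):

# One bdev_nvme_attach_controller fragment per subsystem, joined with IFS=","
# inside a bdev-subsystem config and pretty-printed with jq.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=","; printf '%s' "${config[*]}") ]
    }
  ]
}
JSON
}

# <(...) is what the kernel exposes as /dev/fd/62 in the command line above:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192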
00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:34.767 01:52:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:34.767 "params": { 00:09:34.767 "name": "Nvme1", 00:09:34.767 "trtype": "tcp", 00:09:34.767 "traddr": "10.0.0.2", 00:09:34.767 "adrfam": "ipv4", 00:09:34.767 "trsvcid": "4420", 00:09:34.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.767 "hdgst": false, 00:09:34.767 "ddgst": false 00:09:34.767 }, 00:09:34.767 "method": "bdev_nvme_attach_controller" 00:09:34.767 }' 00:09:34.767 [2024-07-25 01:52:49.965527] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:09:34.767 [2024-07-25 01:52:49.965616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79731 ] 00:09:35.025 [2024-07-25 01:52:50.087508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:35.025 [2024-07-25 01:52:50.105367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.026 [2024-07-25 01:52:50.137170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.026 [2024-07-25 01:52:50.172138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.026 Running I/O for 10 seconds... 00:09:44.992 00:09:44.992 Latency(us) 00:09:44.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.992 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:44.992 Verification LBA range: start 0x0 length 0x1000 00:09:44.992 Nvme1n1 : 10.01 6278.37 49.05 0.00 0.00 20323.48 3038.49 31457.28 00:09:44.992 =================================================================================================================== 00:09:44.992 Total : 6278.37 49.05 0.00 0.00 20323.48 3038.49 31457.28 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=79847 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:45.250 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:45.250 { 00:09:45.251 "params": { 00:09:45.251 "name": "Nvme$subsystem", 00:09:45.251 "trtype": "$TEST_TRANSPORT", 00:09:45.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.251 "adrfam": "ipv4", 00:09:45.251 "trsvcid": "$NVMF_PORT", 00:09:45.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.251 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:45.251 "hdgst": ${hdgst:-false}, 00:09:45.251 "ddgst": ${ddgst:-false} 00:09:45.251 }, 00:09:45.251 "method": "bdev_nvme_attach_controller" 00:09:45.251 } 00:09:45.251 EOF 00:09:45.251 )") 00:09:45.251 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:45.251 [2024-07-25 01:53:00.432328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.432400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:45.251 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:45.251 01:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:45.251 "params": { 00:09:45.251 "name": "Nvme1", 00:09:45.251 "trtype": "tcp", 00:09:45.251 "traddr": "10.0.0.2", 00:09:45.251 "adrfam": "ipv4", 00:09:45.251 "trsvcid": "4420", 00:09:45.251 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.251 "hdgst": false, 00:09:45.251 "ddgst": false 00:09:45.251 }, 00:09:45.251 "method": "bdev_nvme_attach_controller" 00:09:45.251 }' 00:09:45.251 [2024-07-25 01:53:00.444315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.444339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.456337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.456377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.464333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.464371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.466749] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:09:45.251 [2024-07-25 01:53:00.466819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79847 ] 00:09:45.251 [2024-07-25 01:53:00.472326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.472347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.480328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.480366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.488342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.488362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.500327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.500363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.512346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.512386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.524349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.524392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.536346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.536391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.251 [2024-07-25 01:53:00.548349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.251 [2024-07-25 01:53:00.548393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.560349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.560392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.572346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.572384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.584346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.584383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.585364] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
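The error pairs repeating above and below are expected, not failures: while the second bdevperf instance (spdk_pid79847, started with -t 5 -q 128 -w randrw -M 50 -o 8192) keeps zero-copy I/O in flight, the test repeatedly tries to add a namespace under an NSID that is already taken. Each attempt pauses the subsystem (hence nvmf_rpc_ns_paused in the trace), fails with "Requested NSID 1 already in use", and resumes it, exercising the pause/resume path under live I/O. A loop consistent with the trace; the iteration count here is illustrative, not taken from zcopy.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 50; i++)); do
    # Fails by design (NSID 1 is taken); '|| true' keeps going under set -e.
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done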
00:09:45.510 [2024-07-25 01:53:00.596369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.596391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.600190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.510 [2024-07-25 01:53:00.608384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.608431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.620409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.620461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.632389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.632436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.635244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.510 [2024-07-25 01:53:00.644369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.644391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.656403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.656451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.668399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.668448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.671779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:45.510 [2024-07-25 01:53:00.680403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.680452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.692390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.692430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.704405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.704450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.716403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.716446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.728414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.728457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.740429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.740473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.752441] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.752483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.764449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.764493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 Running I/O for 5 seconds... 00:09:45.510 [2024-07-25 01:53:00.776449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.776489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.510 [2024-07-25 01:53:00.793432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.510 [2024-07-25 01:53:00.793464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.811481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.811580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.826907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.826953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.841186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.841234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.857845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.857918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.873735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.873781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.891472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.891519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.907649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.907696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.925356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.925401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.941466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.941512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.958725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 [2024-07-25 01:53:00.958770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.769 [2024-07-25 01:53:00.974623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.769 
[2024-07-25 01:53:00.974686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:45.769 [2024-07-25 01:53:00.992292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:45.769 [2024-07-25 01:53:00.992321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:45.769 [2024-07-25 01:53:01.006707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:45.769 [2024-07-25 01:53:01.006754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair — subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: "Unable to add namespace" — repeats for every add-namespace attempt from 01:53:01.022 (elapsed 00:09:45.769) through 01:53:05.673 (elapsed 00:09:50.450) ...]
00:09:50.450 [2024-07-25 01:53:05.689304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:50.450 [2024-07-25 01:53:05.689352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:50.450 [2024-07-25 01:53:05.703701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:50.450 [2024-07-25 01:53:05.703749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.450 [2024-07-25 01:53:05.720397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.450 [2024-07-25 01:53:05.737875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.450 [2024-07-25 01:53:05.737916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.752867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.752916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.769355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.769385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.780784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.780826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 00:09:50.710 Latency(us) 00:09:50.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.710 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:50.710 Nvme1n1 : 5.01 11178.76 87.33 0.00 0.00 11436.88 4259.84 19899.11 00:09:50.710 =================================================================================================================== 00:09:50.710 Total : 11178.76 87.33 0.00 0.00 11436.88 4259.84 19899.11 00:09:50.710 [2024-07-25 01:53:05.792726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.792804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.804778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.804834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.816786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.816898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.828789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.828843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.840825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.840917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.852825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.852904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.864809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.864887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.876788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.876832] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.888796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.888890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.900821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.900876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.912890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.912937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.924802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.924842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 [2024-07-25 01:53:05.936826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.710 [2024-07-25 01:53:05.936874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.710 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (79847) - No such process 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 79847 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.710 delay0 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.710 01:53:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:50.970 [2024-07-25 01:53:06.134506] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:57.529 Initializing NVMe Controllers 00:09:57.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.529 Associating 
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:57.529 Initialization complete. Launching workers. 00:09:57.529 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 78 00:09:57.529 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 365, failed to submit 33 00:09:57.529 success 243, unsuccess 122, failed 0 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.529 rmmod nvme_tcp 00:09:57.529 rmmod nvme_fabrics 00:09:57.529 rmmod nvme_keyring 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 79700 ']' 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 79700 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 79700 ']' 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 79700 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79700 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79700' 00:09:57.529 killing process with pid 79700 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 79700 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 79700 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:57.529 00:09:57.529 real 0m23.492s 00:09:57.529 user 0m38.826s 00:09:57.529 sys 0m6.634s 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.529 ************************************ 00:09:57.529 END TEST nvmf_zcopy 00:09:57.529 ************************************ 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.529 ************************************ 00:09:57.529 START TEST nvmf_nmic 00:09:57.529 ************************************ 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.529 * Looking for test storage... 
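For context on the zcopy teardown above: target/zcopy.sh kills the I/O process (the "No such process" from kill only means PID 79847 had already exited), swaps the subsystem's namespace onto a bdev_delay device, and runs the abort example against the deliberately slow namespace. A minimal sketch of that sequence, assuming rpc_cmd wraps scripts/rpc.py and SPDK_REPO points at the repository root (illustrative, not the script verbatim):

  #!/usr/bin/env bash
  # Sketch of the namespace swap + abort run from target/zcopy.sh above.
  rpc_cmd() { "$SPDK_REPO/scripts/rpc.py" "$@"; }

  # Drop the original namespace, then re-export malloc0 behind a delay bdev
  # whose four latency knobs (avg/p99 read and write) are 1000000 us = 1 s.
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # 5 s of 50/50 randrw at queue depth 64; with 1 s latencies most commands
  # are still in flight when the example starts submitting aborts.
  "$SPDK_REPO/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The artificial latency is what makes the abort test meaningful: without it, most commands would complete before an abort could reach them.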
00:09:57.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.529 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.530 01:53:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:57.530 Cannot find device "nvmf_tgt_br" 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.530 Cannot find device "nvmf_tgt_br2" 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:09:57.530 Cannot find device "nvmf_tgt_br" 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:57.530 Cannot find device "nvmf_tgt_br2" 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:57.530 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:57.789 01:53:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.789 01:53:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.789 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.789 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.789 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:57.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:57.789 00:09:57.789 --- 10.0.0.2 ping statistics --- 00:09:57.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.789 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:57.789 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:57.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:57.789 00:09:57.789 --- 10.0.0.3 ping statistics --- 00:09:57.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.789 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:57.789 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:57.789 00:09:57.790 --- 10.0.0.1 ping statistics --- 00:09:57.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.790 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=80165 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 80165 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 80165 ']' 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.790 01:53:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.048 [2024-07-25 01:53:13.106812] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:09:58.048 [2024-07-25 01:53:13.106932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.048 [2024-07-25 01:53:13.234091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
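The nvmf_veth_init block above is the network fixture every target test in this run relies on. A standalone sketch of the same topology, assuming a root shell with iproute2 and iptables (interface names and the 10.0.0.0/24 addressing are taken from the log):

  #!/usr/bin/env bash
  # Rebuild the veth/netns topology from nvmf_veth_init above (sketch).
  set -e

  ip netns add nvmf_tgt_ns_spdk

  # Three veth pairs: the *_if ends carry traffic, the *_br ends join a bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Target-side interfaces move into the namespace the nvmf_tgt app runs in.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator is 10.0.0.1 on the host; the target answers on 10.0.0.2/10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side ends so initiator and target share one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Let NVMe/TCP (port 4420) in and bridged frames through.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Same sanity checks as the log: both directions must ping.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

This is also why the target is launched as "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ..." above: the listener at 10.0.0.2:4420 only exists inside that namespace.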
00:09:58.048 [2024-07-25 01:53:13.247909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.049 [2024-07-25 01:53:13.288065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.049 [2024-07-25 01:53:13.288122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.049 [2024-07-25 01:53:13.288132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.049 [2024-07-25 01:53:13.288139] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.049 [2024-07-25 01:53:13.288162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.049 [2024-07-25 01:53:13.291903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.049 [2024-07-25 01:53:13.291971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.049 [2024-07-25 01:53:13.292055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.049 [2024-07-25 01:53:13.292061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.049 [2024-07-25 01:53:13.323607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.982 [2024-07-25 01:53:14.137445] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.982 Malloc0 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:58.982 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.983 [2024-07-25 01:53:14.193071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.983 test case1: single bdev can't be used in multiple subsystems 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.983 [2024-07-25 01:53:14.216948] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:58.983 [2024-07-25 01:53:14.216996] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:58.983 [2024-07-25 01:53:14.217006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.983 request: 00:09:58.983 { 00:09:58.983 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:58.983 "namespace": { 00:09:58.983 "bdev_name": "Malloc0", 00:09:58.983 "no_auto_visible": false 00:09:58.983 }, 00:09:58.983 "method": "nvmf_subsystem_add_ns", 00:09:58.983 "req_id": 1 00:09:58.983 } 00:09:58.983 Got JSON-RPC error response 00:09:58.983 
response: 00:09:58.983 { 00:09:58.983 "code": -32602, 00:09:58.983 "message": "Invalid parameters" 00:09:58.983 } 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:58.983 Adding namespace failed - expected result. 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:58.983 test case2: host connect to nvmf target in multiple paths 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.983 [2024-07-25 01:53:14.229042] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.983 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.241 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:59.241 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.241 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.241 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.241 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:59.241 01:53:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:01.773 01:53:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:01.773 01:53:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.773 01:53:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:01.773 01:53:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:01.773 01:53:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.773 01:53:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:01.773 01:53:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:01.773 [global] 00:10:01.773 thread=1 00:10:01.773 
invalidate=1 00:10:01.773 rw=write 00:10:01.773 time_based=1 00:10:01.773 runtime=1 00:10:01.773 ioengine=libaio 00:10:01.773 direct=1 00:10:01.773 bs=4096 00:10:01.773 iodepth=1 00:10:01.773 norandommap=0 00:10:01.773 numjobs=1 00:10:01.773 00:10:01.773 verify_dump=1 00:10:01.773 verify_backlog=512 00:10:01.773 verify_state_save=0 00:10:01.773 do_verify=1 00:10:01.773 verify=crc32c-intel 00:10:01.773 [job0] 00:10:01.773 filename=/dev/nvme0n1 00:10:01.773 Could not set queue depth (nvme0n1) 00:10:01.773 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.773 fio-3.35 00:10:01.773 Starting 1 thread 00:10:02.709 00:10:02.709 job0: (groupid=0, jobs=1): err= 0: pid=80257: Thu Jul 25 01:53:17 2024 00:10:02.709 read: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec) 00:10:02.709 slat (nsec): min=11374, max=58818, avg=16468.63, stdev=4896.47 00:10:02.709 clat (usec): min=134, max=262, avg=182.62, stdev=20.60 00:10:02.709 lat (usec): min=148, max=277, avg=199.09, stdev=21.07 00:10:02.709 clat percentiles (usec): 00:10:02.709 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:10:02.709 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:10:02.709 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 223], 00:10:02.709 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 258], 99.95th=[ 258], 00:10:02.709 | 99.99th=[ 265] 00:10:02.709 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:02.709 slat (usec): min=18, max=107, avg=24.98, stdev= 7.26 00:10:02.709 clat (usec): min=81, max=242, avg=108.60, stdev=16.22 00:10:02.709 lat (usec): min=101, max=350, avg=133.59, stdev=18.64 00:10:02.709 clat percentiles (usec): 00:10:02.709 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 95], 00:10:02.709 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 106], 60.00th=[ 111], 00:10:02.709 | 70.00th=[ 115], 80.00th=[ 122], 90.00th=[ 131], 95.00th=[ 139], 00:10:02.709 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 186], 99.95th=[ 235], 00:10:02.709 | 99.99th=[ 243] 00:10:02.709 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:02.709 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:02.709 lat (usec) : 100=18.04%, 250=81.81%, 500=0.15% 00:10:02.709 cpu : usr=2.00%, sys=10.20%, ctx=5971, majf=0, minf=2 00:10:02.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.709 issued rwts: total=2898,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.709 00:10:02.709 Run status group 0 (all jobs): 00:10:02.709 READ: bw=11.3MiB/s (11.9MB/s), 11.3MiB/s-11.3MiB/s (11.9MB/s-11.9MB/s), io=11.3MiB (11.9MB), run=1001-1001msec 00:10:02.709 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:02.709 00:10:02.709 Disk stats (read/write): 00:10:02.709 nvme0n1: ios=2610/2803, merge=0/0, ticks=513/356, in_queue=869, util=91.08% 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.709 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.710 rmmod nvme_tcp 00:10:02.710 rmmod nvme_fabrics 00:10:02.710 rmmod nvme_keyring 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 80165 ']' 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 80165 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 80165 ']' 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 80165 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.710 01:53:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80165 00:10:02.968 killing process with pid 80165 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80165' 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 80165 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 80165 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:02.968 ************************************ 00:10:02.968 END TEST nvmf_nmic 00:10:02.968 ************************************ 00:10:02.968 00:10:02.968 real 0m5.643s 00:10:02.968 user 0m18.188s 00:10:02.968 sys 0m2.349s 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:02.968 01:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.228 ************************************ 00:10:03.228 START TEST nvmf_fio_target 00:10:03.228 ************************************ 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:03.228 * Looking for test storage... 
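Stripped of the xtrace noise, the nmic test that just finished is a short RPC conversation plus two host-side connects. A condensed sketch under the same assumptions as before (rpc_cmd wrapping scripts/rpc.py; NVME_HOSTNQN and NVME_HOSTID as generated earlier in the log):

  #!/usr/bin/env bash
  # Condensed sketch of target/nmic.sh as exercised above.
  rpc_cmd() { "$SPDK_REPO/scripts/rpc.py" "$@"; }

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Case 1: a bdev is claimed exclusively by the subsystem that exports it,
  # so adding Malloc0 to a second subsystem must fail (the -32602 above).
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  if rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected: shared bdev was accepted" >&2
      exit 1
  fi

  # Case 2: a second listener on the same subsystem gives the host a second
  # path; each connect creates its own controller, hence the later
  # "disconnected 2 controller(s)".
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The fio write/verify pass in the log then runs against /dev/nvme0n1, and the single "nvme disconnect" tearing down two controllers confirms both paths were usable.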
00:10:03.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.228 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:03.229 
01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:03.229 Cannot find device "nvmf_tgt_br" 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.229 Cannot find device "nvmf_tgt_br2" 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:03.229 Cannot find device "nvmf_tgt_br" 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:03.229 Cannot find device "nvmf_tgt_br2" 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.229 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.488 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:03.489 
01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:03.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:03.489 00:10:03.489 --- 10.0.0.2 ping statistics --- 00:10:03.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.489 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:03.489 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:03.489 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:10:03.489 00:10:03.489 --- 10.0.0.3 ping statistics --- 00:10:03.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.489 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:03.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:10:03.489 00:10:03.489 --- 10.0.0.1 ping statistics --- 00:10:03.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.489 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=80435 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 80435 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 80435 ']' 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.489 01:53:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.748 [2024-07-25 01:53:18.801876] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
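The nvmftestinit sequence traced above builds a self-contained test network: the initiator side stays in the root namespace at 10.0.0.1, the target's two interfaces (10.0.0.2 and 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, the root-namespace ends of the three veth pairs are joined by the nvmf_br bridge, and port 4420 is opened for NVMe/TCP. Reconstructed from the ip/iptables commands above as a sketch (the earlier "Cannot find device"/"Cannot open network namespace" messages are just the cleanup pass finding nothing to remove; the individual 'ip link set ... up' steps are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per interface; the *_br ends stay in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above confirm the topology from both directions: 10.0.0.2 and 10.0.0.3 are reachable from the root namespace, and 10.0.0.1 is reachable from inside nvmf_tgt_ns_spdk.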
00:10:03.748 [2024-07-25 01:53:18.801963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.748 [2024-07-25 01:53:18.925037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:03.748 [2024-07-25 01:53:18.943748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.748 [2024-07-25 01:53:18.974879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.748 [2024-07-25 01:53:18.974945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.748 [2024-07-25 01:53:18.974970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.748 [2024-07-25 01:53:18.974977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.748 [2024-07-25 01:53:18.974984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.748 [2024-07-25 01:53:18.975381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.748 [2024-07-25 01:53:18.975576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.748 [2024-07-25 01:53:18.976679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.748 [2024-07-25 01:53:18.976712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.748 [2024-07-25 01:53:19.003713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.683 01:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.683 01:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:04.683 01:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.683 01:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.683 01:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.683 01:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.683 01:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:04.683 [2024-07-25 01:53:19.963367] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.943 01:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.226 01:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:05.226 01:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.491 01:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:05.491 01:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.491 01:53:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:05.491 01:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.750 01:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:05.750 01:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:06.008 01:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.267 01:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:06.267 01:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.525 01:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:06.525 01:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:06.784 01:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:06.784 01:53:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:07.043 01:53:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.302 01:53:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:07.302 01:53:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.302 01:53:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:07.302 01:53:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:07.561 01:53:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.820 [2024-07-25 01:53:22.980827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.820 01:53:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:08.079 01:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:08.337 01:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.337 01:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial 
SPDKISFASTANDAWESOME 4 00:10:08.337 01:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:08.337 01:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.337 01:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:08.337 01:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:08.337 01:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:10.867 01:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:10.867 01:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.867 01:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:10.867 01:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:10.867 01:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.867 01:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:10.867 01:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:10.867 [global] 00:10:10.867 thread=1 00:10:10.867 invalidate=1 00:10:10.867 rw=write 00:10:10.867 time_based=1 00:10:10.867 runtime=1 00:10:10.867 ioengine=libaio 00:10:10.867 direct=1 00:10:10.867 bs=4096 00:10:10.867 iodepth=1 00:10:10.867 norandommap=0 00:10:10.867 numjobs=1 00:10:10.867 00:10:10.867 verify_dump=1 00:10:10.867 verify_backlog=512 00:10:10.867 verify_state_save=0 00:10:10.867 do_verify=1 00:10:10.867 verify=crc32c-intel 00:10:10.867 [job0] 00:10:10.867 filename=/dev/nvme0n1 00:10:10.867 [job1] 00:10:10.867 filename=/dev/nvme0n2 00:10:10.867 [job2] 00:10:10.867 filename=/dev/nvme0n3 00:10:10.867 [job3] 00:10:10.867 filename=/dev/nvme0n4 00:10:10.867 Could not set queue depth (nvme0n1) 00:10:10.867 Could not set queue depth (nvme0n2) 00:10:10.867 Could not set queue depth (nvme0n3) 00:10:10.867 Could not set queue depth (nvme0n4) 00:10:10.867 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.867 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.867 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.867 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.867 fio-3.35 00:10:10.867 Starting 4 threads 00:10:11.801 00:10:11.801 job0: (groupid=0, jobs=1): err= 0: pid=80614: Thu Jul 25 01:53:26 2024 00:10:11.801 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:11.801 slat (nsec): min=10917, max=54452, avg=14227.72, stdev=3768.01 00:10:11.801 clat (usec): min=128, max=691, avg=159.45, stdev=18.67 00:10:11.801 lat (usec): min=141, max=703, avg=173.68, stdev=19.30 00:10:11.801 clat percentiles (usec): 00:10:11.801 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:11.801 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:10:11.802 | 70.00th=[ 165], 
80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 190], 00:10:11.802 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 223], 99.95th=[ 253], 00:10:11.802 | 99.99th=[ 693] 00:10:11.802 write: IOPS=3190, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:10:11.802 slat (nsec): min=12973, max=96332, avg=22177.94, stdev=7316.54 00:10:11.802 clat (usec): min=87, max=411, avg=119.85, stdev=14.84 00:10:11.802 lat (usec): min=104, max=430, avg=142.03, stdev=16.61 00:10:11.802 clat percentiles (usec): 00:10:11.802 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:10:11.802 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 121], 00:10:11.802 | 70.00th=[ 125], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 147], 00:10:11.802 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 241], 00:10:11.802 | 99.99th=[ 412] 00:10:11.802 bw ( KiB/s): min=12288, max=12288, per=27.22%, avg=12288.00, stdev= 0.00, samples=1 00:10:11.802 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:11.802 lat (usec) : 100=1.69%, 250=98.26%, 500=0.03%, 750=0.02% 00:10:11.802 cpu : usr=2.50%, sys=8.90%, ctx=6268, majf=0, minf=7 00:10:11.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.802 issued rwts: total=3072,3194,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.802 job1: (groupid=0, jobs=1): err= 0: pid=80615: Thu Jul 25 01:53:26 2024 00:10:11.802 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:11.802 slat (nsec): min=8335, max=43831, avg=12350.16, stdev=4034.52 00:10:11.802 clat (usec): min=136, max=1647, avg=230.30, stdev=53.13 00:10:11.802 lat (usec): min=151, max=1656, avg=242.65, stdev=51.74 00:10:11.802 clat percentiles (usec): 00:10:11.802 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 178], 00:10:11.802 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:10:11.802 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:10:11.802 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 383], 99.95th=[ 392], 00:10:11.802 | 99.99th=[ 1647] 00:10:11.802 write: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(9.80MiB/1001msec); 0 zone resets 00:10:11.802 slat (nsec): min=12644, max=90041, avg=22377.88, stdev=5374.75 00:10:11.802 clat (usec): min=97, max=7084, avg=174.64, stdev=180.12 00:10:11.802 lat (usec): min=118, max=7112, avg=197.02, stdev=180.13 00:10:11.802 clat percentiles (usec): 00:10:11.802 | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 123], 00:10:11.802 | 30.00th=[ 135], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 184], 00:10:11.802 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 223], 00:10:11.802 | 99.00th=[ 245], 99.50th=[ 262], 99.90th=[ 3097], 99.95th=[ 3425], 00:10:11.802 | 99.99th=[ 7111] 00:10:11.802 bw ( KiB/s): min=11440, max=11440, per=25.34%, avg=11440.00, stdev= 0.00, samples=1 00:10:11.802 iops : min= 2860, max= 2860, avg=2860.00, stdev= 0.00, samples=1 00:10:11.802 lat (usec) : 100=0.09%, 250=85.67%, 500=14.00%, 750=0.04%, 1000=0.02% 00:10:11.802 lat (msec) : 2=0.09%, 4=0.07%, 10=0.02% 00:10:11.802 cpu : usr=2.40%, sys=6.00%, ctx=4559, majf=0, minf=5 00:10:11.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.802 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.802 issued rwts: total=2048,2510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.802 job2: (groupid=0, jobs=1): err= 0: pid=80616: Thu Jul 25 01:53:26 2024 00:10:11.802 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:11.802 slat (nsec): min=8641, max=47400, avg=13880.00, stdev=3616.15 00:10:11.802 clat (usec): min=153, max=1623, avg=231.90, stdev=47.97 00:10:11.802 lat (usec): min=168, max=1636, avg=245.78, stdev=47.15 00:10:11.802 clat percentiles (usec): 00:10:11.802 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 190], 00:10:11.802 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:10:11.802 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:10:11.802 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 367], 99.95th=[ 396], 00:10:11.802 | 99.99th=[ 1631] 00:10:11.802 write: IOPS=2531, BW=9.89MiB/s (10.4MB/s)(9.90MiB/1001msec); 0 zone resets 00:10:11.802 slat (nsec): min=11149, max=74553, avg=20139.28, stdev=6524.67 00:10:11.802 clat (usec): min=105, max=535, avg=172.87, stdev=35.76 00:10:11.802 lat (usec): min=135, max=558, avg=193.01, stdev=32.54 00:10:11.802 clat percentiles (usec): 00:10:11.802 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 135], 00:10:11.802 | 30.00th=[ 145], 40.00th=[ 167], 50.00th=[ 180], 60.00th=[ 188], 00:10:11.802 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 227], 00:10:11.802 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 523], 00:10:11.802 | 99.99th=[ 537] 00:10:11.802 bw ( KiB/s): min=11640, max=11640, per=25.78%, avg=11640.00, stdev= 0.00, samples=1 00:10:11.802 iops : min= 2910, max= 2910, avg=2910.00, stdev= 0.00, samples=1 00:10:11.802 lat (usec) : 250=87.17%, 500=12.77%, 750=0.04% 00:10:11.802 lat (msec) : 2=0.02% 00:10:11.802 cpu : usr=1.00%, sys=7.20%, ctx=4582, majf=0, minf=11 00:10:11.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.802 issued rwts: total=2048,2534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.802 job3: (groupid=0, jobs=1): err= 0: pid=80617: Thu Jul 25 01:53:26 2024 00:10:11.802 read: IOPS=2598, BW=10.2MiB/s (10.6MB/s)(10.2MiB/1002msec) 00:10:11.802 slat (nsec): min=11946, max=49525, avg=15509.74, stdev=3924.05 00:10:11.802 clat (usec): min=143, max=339, avg=176.65, stdev=17.39 00:10:11.802 lat (usec): min=156, max=355, avg=192.16, stdev=18.26 00:10:11.802 clat percentiles (usec): 00:10:11.802 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:10:11.802 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 180], 00:10:11.802 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 208], 00:10:11.802 | 99.00th=[ 229], 99.50th=[ 235], 99.90th=[ 293], 99.95th=[ 306], 00:10:11.802 | 99.99th=[ 338] 00:10:11.802 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:10:11.802 slat (nsec): min=14743, max=93153, avg=22906.05, stdev=5877.87 00:10:11.802 clat (usec): min=98, max=2096, avg=136.16, stdev=47.45 00:10:11.802 lat (usec): min=118, max=2118, avg=159.06, stdev=48.13 00:10:11.802 clat percentiles (usec): 00:10:11.802 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 
123], 00:10:11.802 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:10:11.802 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 163], 00:10:11.802 | 99.00th=[ 208], 99.50th=[ 269], 99.90th=[ 725], 99.95th=[ 1057], 00:10:11.802 | 99.99th=[ 2089] 00:10:11.802 bw ( KiB/s): min=12288, max=12288, per=27.22%, avg=12288.00, stdev= 0.00, samples=2 00:10:11.802 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:11.802 lat (usec) : 100=0.04%, 250=99.51%, 500=0.37%, 750=0.04%, 1000=0.02% 00:10:11.802 lat (msec) : 2=0.02%, 4=0.02% 00:10:11.802 cpu : usr=2.20%, sys=8.69%, ctx=5678, majf=0, minf=12 00:10:11.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.802 issued rwts: total=2604,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.802 00:10:11.802 Run status group 0 (all jobs): 00:10:11.802 READ: bw=38.1MiB/s (39.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=38.2MiB (40.0MB), run=1001-1002msec 00:10:11.802 WRITE: bw=44.1MiB/s (46.2MB/s), 9.79MiB/s-12.5MiB/s (10.3MB/s-13.1MB/s), io=44.2MiB (46.3MB), run=1001-1002msec 00:10:11.802 00:10:11.802 Disk stats (read/write): 00:10:11.802 nvme0n1: ios=2610/2861, merge=0/0, ticks=447/377, in_queue=824, util=88.78% 00:10:11.802 nvme0n2: ios=1962/2048, merge=0/0, ticks=427/340, in_queue=767, util=87.85% 00:10:11.802 nvme0n3: ios=1945/2048, merge=0/0, ticks=461/338, in_queue=799, util=89.38% 00:10:11.802 nvme0n4: ios=2321/2560, merge=0/0, ticks=420/370, in_queue=790, util=89.73% 00:10:11.802 01:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:11.802 [global] 00:10:11.802 thread=1 00:10:11.802 invalidate=1 00:10:11.802 rw=randwrite 00:10:11.802 time_based=1 00:10:11.802 runtime=1 00:10:11.802 ioengine=libaio 00:10:11.802 direct=1 00:10:11.802 bs=4096 00:10:11.802 iodepth=1 00:10:11.803 norandommap=0 00:10:11.803 numjobs=1 00:10:11.803 00:10:11.803 verify_dump=1 00:10:11.803 verify_backlog=512 00:10:11.803 verify_state_save=0 00:10:11.803 do_verify=1 00:10:11.803 verify=crc32c-intel 00:10:11.803 [job0] 00:10:11.803 filename=/dev/nvme0n1 00:10:11.803 [job1] 00:10:11.803 filename=/dev/nvme0n2 00:10:11.803 [job2] 00:10:11.803 filename=/dev/nvme0n3 00:10:11.803 [job3] 00:10:11.803 filename=/dev/nvme0n4 00:10:11.803 Could not set queue depth (nvme0n1) 00:10:11.803 Could not set queue depth (nvme0n2) 00:10:11.803 Could not set queue depth (nvme0n3) 00:10:11.803 Could not set queue depth (nvme0n4) 00:10:12.060 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.060 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.061 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.061 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.061 fio-3.35 00:10:12.061 Starting 4 threads 00:10:13.435 00:10:13.435 job0: (groupid=0, jobs=1): err= 0: pid=80670: Thu Jul 25 01:53:28 2024 00:10:13.435 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:13.435 slat (nsec): min=11056, 
max=43241, avg=13626.76, stdev=2932.63 00:10:13.435 clat (usec): min=128, max=205, avg=154.15, stdev=12.19 00:10:13.435 lat (usec): min=141, max=222, avg=167.78, stdev=12.88 00:10:13.435 clat percentiles (usec): 00:10:13.435 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:10:13.435 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:10:13.435 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 178], 00:10:13.435 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 204], 99.95th=[ 204], 00:10:13.435 | 99.99th=[ 206] 00:10:13.435 write: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1001msec); 0 zone resets 00:10:13.435 slat (nsec): min=13560, max=64726, avg=20998.38, stdev=5096.35 00:10:13.435 clat (usec): min=89, max=587, avg=117.09, stdev=15.38 00:10:13.435 lat (usec): min=107, max=621, avg=138.09, stdev=16.29 00:10:13.435 clat percentiles (usec): 00:10:13.435 | 1.00th=[ 95], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 108], 00:10:13.435 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 119], 00:10:13.435 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 139], 00:10:13.435 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 429], 00:10:13.435 | 99.99th=[ 586] 00:10:13.435 bw ( KiB/s): min=13548, max=13548, per=31.48%, avg=13548.00, stdev= 0.00, samples=1 00:10:13.435 iops : min= 3387, max= 3387, avg=3387.00, stdev= 0.00, samples=1 00:10:13.435 lat (usec) : 100=2.80%, 250=97.17%, 500=0.02%, 750=0.02% 00:10:13.435 cpu : usr=3.30%, sys=8.20%, ctx=6492, majf=0, minf=11 00:10:13.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.435 issued rwts: total=3072,3420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.435 job1: (groupid=0, jobs=1): err= 0: pid=80671: Thu Jul 25 01:53:28 2024 00:10:13.435 read: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec) 00:10:13.435 slat (nsec): min=11213, max=58861, avg=14888.29, stdev=3846.06 00:10:13.435 clat (usec): min=130, max=226, avg=157.62, stdev=13.67 00:10:13.435 lat (usec): min=142, max=244, avg=172.51, stdev=14.51 00:10:13.435 clat percentiles (usec): 00:10:13.435 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:13.435 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:10:13.435 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 184], 00:10:13.435 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 219], 99.95th=[ 223], 00:10:13.435 | 99.99th=[ 227] 00:10:13.435 write: IOPS=3255, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1000msec); 0 zone resets 00:10:13.435 slat (nsec): min=13838, max=76846, avg=22643.31, stdev=6078.54 00:10:13.435 clat (usec): min=92, max=465, avg=117.93, stdev=13.35 00:10:13.435 lat (usec): min=112, max=489, avg=140.57, stdev=14.33 00:10:13.435 clat percentiles (usec): 00:10:13.435 | 1.00th=[ 98], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 109], 00:10:13.435 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 119], 00:10:13.435 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 135], 95.00th=[ 141], 00:10:13.435 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 172], 99.95th=[ 251], 00:10:13.435 | 99.99th=[ 465] 00:10:13.435 bw ( KiB/s): min=12718, max=12718, per=29.55%, avg=12718.00, stdev= 0.00, samples=1 00:10:13.435 iops : min= 3179, max= 3179, avg=3179.00, stdev= 0.00, 
samples=1 00:10:13.435 lat (usec) : 100=1.41%, 250=98.56%, 500=0.03% 00:10:13.435 cpu : usr=2.70%, sys=9.00%, ctx=6327, majf=0, minf=9 00:10:13.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.435 issued rwts: total=3072,3255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.435 job2: (groupid=0, jobs=1): err= 0: pid=80672: Thu Jul 25 01:53:28 2024 00:10:13.435 read: IOPS=1664, BW=6657KiB/s (6817kB/s)(6664KiB/1001msec) 00:10:13.435 slat (nsec): min=11972, max=79149, avg=16276.18, stdev=3723.76 00:10:13.435 clat (usec): min=182, max=562, avg=290.38, stdev=30.49 00:10:13.435 lat (usec): min=199, max=578, avg=306.65, stdev=31.71 00:10:13.435 clat percentiles (usec): 00:10:13.435 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 00:10:13.435 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:10:13.435 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:10:13.435 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 537], 99.95th=[ 562], 00:10:13.435 | 99.99th=[ 562] 00:10:13.435 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:13.435 slat (nsec): min=17714, max=83270, avg=23681.46, stdev=5458.29 00:10:13.435 clat (usec): min=75, max=2361, avg=211.35, stdev=57.92 00:10:13.435 lat (usec): min=132, max=2382, avg=235.03, stdev=58.51 00:10:13.435 clat percentiles (usec): 00:10:13.435 | 1.00th=[ 130], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 196], 00:10:13.435 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:10:13.435 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 247], 00:10:13.435 | 99.00th=[ 351], 99.50th=[ 396], 99.90th=[ 553], 99.95th=[ 693], 00:10:13.435 | 99.99th=[ 2376] 00:10:13.435 bw ( KiB/s): min= 8175, max= 8175, per=18.99%, avg=8175.00, stdev= 0.00, samples=1 00:10:13.435 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:13.435 lat (usec) : 100=0.03%, 250=53.34%, 500=46.45%, 750=0.16% 00:10:13.435 lat (msec) : 4=0.03% 00:10:13.435 cpu : usr=1.90%, sys=5.50%, ctx=3720, majf=0, minf=17 00:10:13.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.435 issued rwts: total=1666,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.435 job3: (groupid=0, jobs=1): err= 0: pid=80673: Thu Jul 25 01:53:28 2024 00:10:13.435 read: IOPS=1676, BW=6705KiB/s (6866kB/s)(6712KiB/1001msec) 00:10:13.435 slat (nsec): min=11654, max=44626, avg=15543.04, stdev=3282.69 00:10:13.435 clat (usec): min=188, max=589, avg=291.43, stdev=39.16 00:10:13.435 lat (usec): min=202, max=608, avg=306.98, stdev=40.01 00:10:13.435 clat percentiles (usec): 00:10:13.435 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:10:13.435 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:10:13.435 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 330], 00:10:13.435 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 586], 00:10:13.435 | 99.99th=[ 586] 00:10:13.435 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone 
resets 00:10:13.435 slat (usec): min=17, max=175, avg=23.34, stdev= 6.62 00:10:13.435 clat (usec): min=111, max=2441, avg=209.75, stdev=58.41 00:10:13.435 lat (usec): min=132, max=2495, avg=233.09, stdev=60.60 00:10:13.435 clat percentiles (usec): 00:10:13.435 | 1.00th=[ 126], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:10:13.435 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:10:13.435 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 247], 00:10:13.435 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 494], 99.95th=[ 1057], 00:10:13.435 | 99.99th=[ 2442] 00:10:13.435 bw ( KiB/s): min= 8192, max= 8192, per=19.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:13.435 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:13.435 lat (usec) : 250=54.21%, 500=45.09%, 750=0.64% 00:10:13.435 lat (msec) : 2=0.03%, 4=0.03% 00:10:13.435 cpu : usr=1.50%, sys=5.70%, ctx=3728, majf=0, minf=13 00:10:13.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.435 issued rwts: total=1678,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.435 00:10:13.435 Run status group 0 (all jobs): 00:10:13.435 READ: bw=37.0MiB/s (38.8MB/s), 6657KiB/s-12.0MiB/s (6817kB/s-12.6MB/s), io=37.1MiB (38.9MB), run=1000-1001msec 00:10:13.435 WRITE: bw=42.0MiB/s (44.1MB/s), 8184KiB/s-13.3MiB/s (8380kB/s-14.0MB/s), io=42.1MiB (44.1MB), run=1000-1001msec 00:10:13.435 00:10:13.435 Disk stats (read/write): 00:10:13.435 nvme0n1: ios=2610/3062, merge=0/0, ticks=429/380, in_queue=809, util=88.28% 00:10:13.435 nvme0n2: ios=2591/2918, merge=0/0, ticks=434/367, in_queue=801, util=88.15% 00:10:13.435 nvme0n3: ios=1536/1624, merge=0/0, ticks=462/361, in_queue=823, util=89.16% 00:10:13.435 nvme0n4: ios=1536/1662, merge=0/0, ticks=456/358, in_queue=814, util=89.52% 00:10:13.435 01:53:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:13.435 [global] 00:10:13.435 thread=1 00:10:13.435 invalidate=1 00:10:13.435 rw=write 00:10:13.435 time_based=1 00:10:13.435 runtime=1 00:10:13.435 ioengine=libaio 00:10:13.435 direct=1 00:10:13.435 bs=4096 00:10:13.435 iodepth=128 00:10:13.435 norandommap=0 00:10:13.435 numjobs=1 00:10:13.435 00:10:13.435 verify_dump=1 00:10:13.435 verify_backlog=512 00:10:13.435 verify_state_save=0 00:10:13.435 do_verify=1 00:10:13.435 verify=crc32c-intel 00:10:13.435 [job0] 00:10:13.435 filename=/dev/nvme0n1 00:10:13.435 [job1] 00:10:13.436 filename=/dev/nvme0n2 00:10:13.436 [job2] 00:10:13.436 filename=/dev/nvme0n3 00:10:13.436 [job3] 00:10:13.436 filename=/dev/nvme0n4 00:10:13.436 Could not set queue depth (nvme0n1) 00:10:13.436 Could not set queue depth (nvme0n2) 00:10:13.436 Could not set queue depth (nvme0n3) 00:10:13.436 Could not set queue depth (nvme0n4) 00:10:13.436 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.436 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.436 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.436 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:13.436 fio-3.35 00:10:13.436 Starting 4 threads 00:10:14.372 00:10:14.372 job0: (groupid=0, jobs=1): err= 0: pid=80737: Thu Jul 25 01:53:29 2024 00:10:14.372 read: IOPS=4844, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1003msec) 00:10:14.372 slat (usec): min=4, max=5519, avg=100.31, stdev=440.03 00:10:14.372 clat (usec): min=2629, max=18348, avg=13041.81, stdev=1539.64 00:10:14.372 lat (usec): min=2642, max=18420, avg=13142.12, stdev=1551.29 00:10:14.372 clat percentiles (usec): 00:10:14.372 | 1.00th=[ 6521], 5.00th=[10945], 10.00th=[11863], 20.00th=[12518], 00:10:14.372 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:10:14.372 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14091], 95.00th=[15401], 00:10:14.372 | 99.00th=[16712], 99.50th=[16909], 99.90th=[18220], 99.95th=[18220], 00:10:14.372 | 99.99th=[18220] 00:10:14.372 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:14.372 slat (usec): min=10, max=5434, avg=92.13, stdev=517.70 00:10:14.372 clat (usec): min=5616, max=18599, avg=12406.18, stdev=1239.38 00:10:14.372 lat (usec): min=5639, max=18616, avg=12498.31, stdev=1328.08 00:10:14.372 clat percentiles (usec): 00:10:14.372 | 1.00th=[ 8979], 5.00th=[10683], 10.00th=[11338], 20.00th=[11731], 00:10:14.372 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:10:14.372 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[14877], 00:10:14.372 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482], 00:10:14.372 | 99.99th=[18482] 00:10:14.372 bw ( KiB/s): min=20480, max=20521, per=26.75%, avg=20500.50, stdev=28.99, samples=2 00:10:14.372 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:14.372 lat (msec) : 4=0.29%, 10=2.48%, 20=97.23% 00:10:14.372 cpu : usr=4.79%, sys=13.87%, ctx=353, majf=0, minf=6 00:10:14.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:14.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.372 issued rwts: total=4859,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.372 job1: (groupid=0, jobs=1): err= 0: pid=80738: Thu Jul 25 01:53:29 2024 00:10:14.372 read: IOPS=4868, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1003msec) 00:10:14.372 slat (usec): min=7, max=3381, avg=96.80, stdev=379.35 00:10:14.372 clat (usec): min=485, max=16225, avg=12845.28, stdev=1319.83 00:10:14.372 lat (usec): min=2305, max=16263, avg=12942.08, stdev=1352.05 00:10:14.372 clat percentiles (usec): 00:10:14.372 | 1.00th=[ 6783], 5.00th=[11207], 10.00th=[11994], 20.00th=[12387], 00:10:14.372 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:14.372 | 70.00th=[13173], 80.00th=[13304], 90.00th=[14222], 95.00th=[14746], 00:10:14.372 | 99.00th=[15139], 99.50th=[15270], 99.90th=[15926], 99.95th=[15926], 00:10:14.372 | 99.99th=[16188] 00:10:14.372 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:14.372 slat (usec): min=10, max=4722, avg=95.29, stdev=435.97 00:10:14.372 clat (usec): min=9868, max=17004, avg=12492.75, stdev=892.77 00:10:14.372 lat (usec): min=9890, max=17055, avg=12588.04, stdev=979.59 00:10:14.372 clat percentiles (usec): 00:10:14.372 | 1.00th=[10290], 5.00th=[11469], 10.00th=[11731], 20.00th=[11994], 00:10:14.372 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 
00:10:14.372 | 70.00th=[12518], 80.00th=[13173], 90.00th=[13566], 95.00th=[14484], 00:10:14.372 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16450], 99.95th=[16909], 00:10:14.372 | 99.99th=[16909] 00:10:14.372 bw ( KiB/s): min=20480, max=20521, per=26.75%, avg=20500.50, stdev=28.99, samples=2 00:10:14.372 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:14.372 lat (usec) : 500=0.01% 00:10:14.372 lat (msec) : 4=0.22%, 10=0.66%, 20=99.11% 00:10:14.373 cpu : usr=4.79%, sys=14.17%, ctx=408, majf=0, minf=9 00:10:14.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:14.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.373 issued rwts: total=4883,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.373 job2: (groupid=0, jobs=1): err= 0: pid=80739: Thu Jul 25 01:53:29 2024 00:10:14.373 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:14.373 slat (usec): min=5, max=6626, avg=112.83, stdev=546.43 00:10:14.373 clat (usec): min=11092, max=18596, avg=14987.94, stdev=847.78 00:10:14.373 lat (usec): min=13836, max=18604, avg=15100.77, stdev=659.91 00:10:14.373 clat percentiles (usec): 00:10:14.373 | 1.00th=[11731], 5.00th=[14222], 10.00th=[14353], 20.00th=[14615], 00:10:14.373 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:10:14.373 | 70.00th=[15270], 80.00th=[15270], 90.00th=[15533], 95.00th=[15664], 00:10:14.373 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:10:14.373 | 99.99th=[18482] 00:10:14.373 write: IOPS=4499, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1003msec); 0 zone resets 00:10:14.373 slat (usec): min=10, max=8390, avg=111.75, stdev=497.76 00:10:14.373 clat (usec): min=677, max=20003, avg=14438.63, stdev=1537.54 00:10:14.373 lat (usec): min=3586, max=20018, avg=14550.38, stdev=1457.35 00:10:14.373 clat percentiles (usec): 00:10:14.373 | 1.00th=[ 7570], 5.00th=[12256], 10.00th=[13829], 20.00th=[14091], 00:10:14.373 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14484], 60.00th=[14615], 00:10:14.373 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15008], 95.00th=[15401], 00:10:14.373 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:10:14.373 | 99.99th=[20055] 00:10:14.373 bw ( KiB/s): min=17194, max=17920, per=22.91%, avg=17557.00, stdev=513.36, samples=2 00:10:14.373 iops : min= 4298, max= 4480, avg=4389.00, stdev=128.69, samples=2 00:10:14.373 lat (usec) : 750=0.01% 00:10:14.373 lat (msec) : 4=0.17%, 10=0.57%, 20=99.06%, 50=0.19% 00:10:14.373 cpu : usr=4.29%, sys=12.08%, ctx=292, majf=0, minf=9 00:10:14.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:14.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.373 issued rwts: total=4096,4513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.373 job3: (groupid=0, jobs=1): err= 0: pid=80740: Thu Jul 25 01:53:29 2024 00:10:14.373 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:14.373 slat (usec): min=6, max=4329, avg=114.28, stdev=460.21 00:10:14.373 clat (usec): min=12257, max=19298, avg=15131.92, stdev=962.10 00:10:14.373 lat (usec): min=12270, max=20161, avg=15246.21, stdev=1033.02 00:10:14.373 clat 
percentiles (usec): 00:10:14.373 | 1.00th=[12649], 5.00th=[13960], 10.00th=[14353], 20.00th=[14615], 00:10:14.373 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:10:14.373 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16581], 95.00th=[17171], 00:10:14.373 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18482], 99.95th=[19006], 00:10:14.373 | 99.99th=[19268] 00:10:14.373 write: IOPS=4447, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1003msec); 0 zone resets 00:10:14.373 slat (usec): min=8, max=5284, avg=111.19, stdev=555.93 00:10:14.373 clat (usec): min=1941, max=19562, avg=14492.26, stdev=1614.42 00:10:14.373 lat (usec): min=2003, max=19600, avg=14603.45, stdev=1692.19 00:10:14.373 clat percentiles (usec): 00:10:14.373 | 1.00th=[ 6456], 5.00th=[13173], 10.00th=[13698], 20.00th=[13960], 00:10:14.373 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:10:14.373 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15664], 95.00th=[17171], 00:10:14.373 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19268], 99.95th=[19530], 00:10:14.373 | 99.99th=[19530] 00:10:14.373 bw ( KiB/s): min=17064, max=17608, per=22.62%, avg=17336.00, stdev=384.67, samples=2 00:10:14.373 iops : min= 4266, max= 4402, avg=4334.00, stdev=96.17, samples=2 00:10:14.373 lat (msec) : 2=0.01%, 4=0.26%, 10=0.49%, 20=99.24% 00:10:14.373 cpu : usr=3.89%, sys=12.77%, ctx=311, majf=0, minf=9 00:10:14.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:14.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.373 issued rwts: total=4096,4461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.373 00:10:14.373 Run status group 0 (all jobs): 00:10:14.373 READ: bw=69.8MiB/s (73.2MB/s), 16.0MiB/s-19.0MiB/s (16.7MB/s-19.9MB/s), io=70.1MiB (73.5MB), run=1003-1003msec 00:10:14.373 WRITE: bw=74.8MiB/s (78.5MB/s), 17.4MiB/s-19.9MiB/s (18.2MB/s-20.9MB/s), io=75.1MiB (78.7MB), run=1003-1003msec 00:10:14.373 00:10:14.373 Disk stats (read/write): 00:10:14.373 nvme0n1: ios=4145/4450, merge=0/0, ticks=25912/23352, in_queue=49264, util=88.06% 00:10:14.373 nvme0n2: ios=4125/4480, merge=0/0, ticks=16753/15717, in_queue=32470, util=88.32% 00:10:14.373 nvme0n3: ios=3584/3776, merge=0/0, ticks=12090/12028, in_queue=24118, util=88.85% 00:10:14.373 nvme0n4: ios=3584/3760, merge=0/0, ticks=17248/15516, in_queue=32764, util=89.73% 00:10:14.631 01:53:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:14.631 [global] 00:10:14.631 thread=1 00:10:14.631 invalidate=1 00:10:14.631 rw=randwrite 00:10:14.631 time_based=1 00:10:14.631 runtime=1 00:10:14.631 ioengine=libaio 00:10:14.631 direct=1 00:10:14.631 bs=4096 00:10:14.632 iodepth=128 00:10:14.632 norandommap=0 00:10:14.632 numjobs=1 00:10:14.632 00:10:14.632 verify_dump=1 00:10:14.632 verify_backlog=512 00:10:14.632 verify_state_save=0 00:10:14.632 do_verify=1 00:10:14.632 verify=crc32c-intel 00:10:14.632 [job0] 00:10:14.632 filename=/dev/nvme0n1 00:10:14.632 [job1] 00:10:14.632 filename=/dev/nvme0n2 00:10:14.632 [job2] 00:10:14.632 filename=/dev/nvme0n3 00:10:14.632 [job3] 00:10:14.632 filename=/dev/nvme0n4 00:10:14.632 Could not set queue depth (nvme0n1) 00:10:14.632 Could not set queue depth (nvme0n2) 00:10:14.632 Could not set queue depth (nvme0n3) 00:10:14.632 
Could not set queue depth (nvme0n4) 00:10:14.632 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.632 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.632 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.632 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.632 fio-3.35 00:10:14.632 Starting 4 threads 00:10:16.009 00:10:16.009 job0: (groupid=0, jobs=1): err= 0: pid=80794: Thu Jul 25 01:53:31 2024 00:10:16.009 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:10:16.009 slat (usec): min=10, max=20459, avg=196.51, stdev=1196.83 00:10:16.009 clat (usec): min=10702, max=58898, avg=25039.23, stdev=6830.64 00:10:16.009 lat (usec): min=12281, max=58929, avg=25235.74, stdev=6898.30 00:10:16.009 clat percentiles (usec): 00:10:16.009 | 1.00th=[12387], 5.00th=[18220], 10.00th=[20579], 20.00th=[21890], 00:10:16.009 | 30.00th=[22414], 40.00th=[22676], 50.00th=[23200], 60.00th=[24249], 00:10:16.009 | 70.00th=[24511], 80.00th=[25297], 90.00th=[36963], 95.00th=[42730], 00:10:16.009 | 99.00th=[53740], 99.50th=[54264], 99.90th=[56886], 99.95th=[58459], 00:10:16.009 | 99.99th=[58983] 00:10:16.009 write: IOPS=2288, BW=9152KiB/s (9372kB/s)(9244KiB/1010msec); 0 zone resets 00:10:16.009 slat (usec): min=14, max=13950, avg=248.96, stdev=1214.41 00:10:16.009 clat (usec): min=7819, max=79512, avg=33065.54, stdev=17042.50 00:10:16.009 lat (usec): min=7862, max=79588, avg=33314.50, stdev=17153.07 00:10:16.009 clat percentiles (usec): 00:10:16.009 | 1.00th=[14746], 5.00th=[20317], 10.00th=[20841], 20.00th=[21627], 00:10:16.009 | 30.00th=[21890], 40.00th=[22676], 50.00th=[23462], 60.00th=[26608], 00:10:16.009 | 70.00th=[33162], 80.00th=[48497], 90.00th=[64226], 95.00th=[70779], 00:10:16.009 | 99.00th=[76022], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:10:16.009 | 99.99th=[79168] 00:10:16.009 bw ( KiB/s): min= 5120, max=12312, per=13.94%, avg=8716.00, stdev=5085.51, samples=2 00:10:16.009 iops : min= 1280, max= 3078, avg=2179.00, stdev=1271.38, samples=2 00:10:16.009 lat (msec) : 10=0.28%, 20=5.55%, 50=83.34%, 100=10.83% 00:10:16.009 cpu : usr=2.87%, sys=7.04%, ctx=201, majf=0, minf=9 00:10:16.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:16.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.009 issued rwts: total=2048,2311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.009 job1: (groupid=0, jobs=1): err= 0: pid=80795: Thu Jul 25 01:53:31 2024 00:10:16.009 read: IOPS=3295, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1009msec) 00:10:16.009 slat (usec): min=7, max=16312, avg=148.71, stdev=1024.99 00:10:16.009 clat (usec): min=6257, max=39146, avg=20520.42, stdev=3959.45 00:10:16.009 lat (usec): min=11391, max=45135, avg=20669.12, stdev=4024.49 00:10:16.009 clat percentiles (usec): 00:10:16.009 | 1.00th=[12125], 5.00th=[15270], 10.00th=[15795], 20.00th=[16712], 00:10:16.009 | 30.00th=[17171], 40.00th=[18482], 50.00th=[21365], 60.00th=[22676], 00:10:16.009 | 70.00th=[22938], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:10:16.009 | 99.00th=[28181], 99.50th=[31327], 99.90th=[35914], 99.95th=[35914], 
00:10:16.009 | 99.99th=[39060] 00:10:16.009 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:10:16.009 slat (usec): min=6, max=17883, avg=133.01, stdev=899.03 00:10:16.009 clat (usec): min=6257, max=31465, avg=16613.49, stdev=5229.02 00:10:16.009 lat (usec): min=7196, max=31492, avg=16746.51, stdev=5196.88 00:10:16.009 clat percentiles (usec): 00:10:16.009 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[11338], 20.00th=[11863], 00:10:16.009 | 30.00th=[12256], 40.00th=[12911], 50.00th=[14877], 60.00th=[18220], 00:10:16.009 | 70.00th=[21365], 80.00th=[22152], 90.00th=[22676], 95.00th=[23725], 00:10:16.009 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:10:16.009 | 99.99th=[31589] 00:10:16.010 bw ( KiB/s): min=12312, max=16384, per=22.95%, avg=14348.00, stdev=2879.34, samples=2 00:10:16.010 iops : min= 3078, max= 4096, avg=3587.00, stdev=719.83, samples=2 00:10:16.010 lat (msec) : 10=1.04%, 20=52.22%, 50=46.74% 00:10:16.010 cpu : usr=3.08%, sys=10.32%, ctx=185, majf=0, minf=15 00:10:16.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:16.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.010 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.010 job2: (groupid=0, jobs=1): err= 0: pid=80796: Thu Jul 25 01:53:31 2024 00:10:16.010 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:16.010 slat (usec): min=7, max=3495, avg=103.42, stdev=498.74 00:10:16.010 clat (usec): min=10034, max=15645, avg=13826.99, stdev=780.69 00:10:16.010 lat (usec): min=12572, max=15667, avg=13930.42, stdev=608.27 00:10:16.010 clat percentiles (usec): 00:10:16.010 | 1.00th=[10814], 5.00th=[12780], 10.00th=[13042], 20.00th=[13304], 00:10:16.010 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:10:16.010 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14746], 95.00th=[14877], 00:10:16.010 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15664], 99.95th=[15664], 00:10:16.010 | 99.99th=[15664] 00:10:16.010 write: IOPS=4754, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1003msec); 0 zone resets 00:10:16.010 slat (usec): min=10, max=5087, avg=102.35, stdev=444.93 00:10:16.010 clat (usec): min=293, max=16655, avg=13212.05, stdev=1393.67 00:10:16.010 lat (usec): min=3008, max=16682, avg=13314.40, stdev=1324.07 00:10:16.010 clat percentiles (usec): 00:10:16.010 | 1.00th=[ 6587], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:10:16.010 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:10:16.010 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14615], 00:10:16.010 | 99.00th=[16450], 99.50th=[16581], 99.90th=[16581], 99.95th=[16581], 00:10:16.010 | 99.99th=[16712] 00:10:16.010 bw ( KiB/s): min=18176, max=18952, per=29.70%, avg=18564.00, stdev=548.71, samples=2 00:10:16.010 iops : min= 4544, max= 4738, avg=4641.00, stdev=137.18, samples=2 00:10:16.010 lat (usec) : 500=0.01% 00:10:16.010 lat (msec) : 4=0.34%, 10=0.79%, 20=98.86% 00:10:16.010 cpu : usr=4.49%, sys=12.38%, ctx=310, majf=0, minf=6 00:10:16.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:16.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.010 issued rwts: 
total=4608,4769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.010 job3: (groupid=0, jobs=1): err= 0: pid=80797: Thu Jul 25 01:53:31 2024 00:10:16.010 read: IOPS=4910, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1005msec) 00:10:16.010 slat (usec): min=5, max=6481, avg=95.29, stdev=588.30 00:10:16.010 clat (usec): min=1738, max=22142, avg=13275.21, stdev=1601.71 00:10:16.010 lat (usec): min=6537, max=25811, avg=13370.49, stdev=1621.56 00:10:16.010 clat percentiles (usec): 00:10:16.010 | 1.00th=[ 7373], 5.00th=[10028], 10.00th=[12387], 20.00th=[12780], 00:10:16.010 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:10:16.010 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14615], 00:10:16.010 | 99.00th=[20055], 99.50th=[20579], 99.90th=[22152], 99.95th=[22152], 00:10:16.010 | 99.99th=[22152] 00:10:16.010 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:16.010 slat (usec): min=8, max=7355, avg=95.60, stdev=545.71 00:10:16.010 clat (usec): min=6357, max=17101, avg=12049.34, stdev=1114.39 00:10:16.010 lat (usec): min=8186, max=17259, avg=12144.93, stdev=1002.01 00:10:16.010 clat percentiles (usec): 00:10:16.010 | 1.00th=[ 8160], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:10:16.010 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:10:16.010 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13829], 00:10:16.010 | 99.00th=[15270], 99.50th=[15533], 99.90th=[17171], 99.95th=[17171], 00:10:16.010 | 99.99th=[17171] 00:10:16.010 bw ( KiB/s): min=20480, max=20521, per=32.79%, avg=20500.50, stdev=28.99, samples=2 00:10:16.010 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:16.010 lat (msec) : 2=0.01%, 10=4.02%, 20=95.49%, 50=0.48% 00:10:16.010 cpu : usr=4.48%, sys=14.94%, ctx=216, majf=0, minf=7 00:10:16.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:16.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.010 issued rwts: total=4935,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.010 00:10:16.010 Run status group 0 (all jobs): 00:10:16.010 READ: bw=57.7MiB/s (60.5MB/s), 8111KiB/s-19.2MiB/s (8306kB/s-20.1MB/s), io=58.3MiB (61.1MB), run=1003-1010msec 00:10:16.010 WRITE: bw=61.0MiB/s (64.0MB/s), 9152KiB/s-19.9MiB/s (9372kB/s-20.9MB/s), io=61.7MiB (64.7MB), run=1003-1010msec 00:10:16.010 00:10:16.010 Disk stats (read/write): 00:10:16.010 nvme0n1: ios=1900/2048, merge=0/0, ticks=22697/29036, in_queue=51733, util=88.57% 00:10:16.010 nvme0n2: ios=2673/3072, merge=0/0, ticks=53048/50153, in_queue=103201, util=88.97% 00:10:16.010 nvme0n3: ios=3936/4096, merge=0/0, ticks=12412/11952, in_queue=24364, util=89.30% 00:10:16.010 nvme0n4: ios=4096/4480, merge=0/0, ticks=51402/49667, in_queue=101069, util=89.75% 00:10:16.010 01:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:16.010 01:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=80810 00:10:16.010 01:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:16.010 01:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:16.010 [global] 00:10:16.010 
thread=1 00:10:16.010 invalidate=1 00:10:16.010 rw=read 00:10:16.010 time_based=1 00:10:16.010 runtime=10 00:10:16.010 ioengine=libaio 00:10:16.010 direct=1 00:10:16.010 bs=4096 00:10:16.010 iodepth=1 00:10:16.010 norandommap=1 00:10:16.010 numjobs=1 00:10:16.010 00:10:16.010 [job0] 00:10:16.010 filename=/dev/nvme0n1 00:10:16.010 [job1] 00:10:16.010 filename=/dev/nvme0n2 00:10:16.010 [job2] 00:10:16.010 filename=/dev/nvme0n3 00:10:16.010 [job3] 00:10:16.010 filename=/dev/nvme0n4 00:10:16.010 Could not set queue depth (nvme0n1) 00:10:16.010 Could not set queue depth (nvme0n2) 00:10:16.010 Could not set queue depth (nvme0n3) 00:10:16.010 Could not set queue depth (nvme0n4) 00:10:16.010 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.011 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.011 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.011 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.011 fio-3.35 00:10:16.011 Starting 4 threads 00:10:19.295 01:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:19.295 fio: pid=80853, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:19.295 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=63926272, buflen=4096 00:10:19.295 01:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:19.553 fio: pid=80852, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:19.553 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=36847616, buflen=4096 00:10:19.553 01:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.553 01:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:19.553 fio: pid=80850, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:19.553 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=39649280, buflen=4096 00:10:19.811 01:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.811 01:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:19.811 fio: pid=80851, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:19.811 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=46776320, buflen=4096 00:10:20.069 01:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.069 01:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:20.069 00:10:20.069 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80850: Thu Jul 25 01:53:35 2024 00:10:20.069 read: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(37.8MiB/3426msec) 00:10:20.069 slat (usec): min=13, max=15276, 
avg=25.61, stdev=260.16 00:10:20.069 clat (usec): min=4, max=2191, avg=325.49, stdev=59.54 00:10:20.069 lat (usec): min=144, max=15585, avg=351.10, stdev=267.36 00:10:20.069 clat percentiles (usec): 00:10:20.069 | 1.00th=[ 167], 5.00th=[ 229], 10.00th=[ 251], 20.00th=[ 302], 00:10:20.069 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 338], 00:10:20.069 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 379], 00:10:20.069 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 668], 99.95th=[ 840], 00:10:20.069 | 99.99th=[ 2180] 00:10:20.069 bw ( KiB/s): min=10808, max=11776, per=22.17%, avg=11054.67, stdev=361.98, samples=6 00:10:20.069 iops : min= 2702, max= 2944, avg=2763.67, stdev=90.50, samples=6 00:10:20.069 lat (usec) : 10=0.01%, 250=9.78%, 500=88.22%, 750=1.90%, 1000=0.05% 00:10:20.069 lat (msec) : 2=0.01%, 4=0.01% 00:10:20.069 cpu : usr=1.20%, sys=5.31%, ctx=9686, majf=0, minf=1 00:10:20.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.069 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.069 issued rwts: total=9681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.069 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80851: Thu Jul 25 01:53:35 2024 00:10:20.069 read: IOPS=3114, BW=12.2MiB/s (12.8MB/s)(44.6MiB/3667msec) 00:10:20.069 slat (usec): min=8, max=15742, avg=21.53, stdev=241.47 00:10:20.069 clat (usec): min=121, max=4089, avg=297.70, stdev=109.25 00:10:20.069 lat (usec): min=133, max=16011, avg=319.23, stdev=264.10 00:10:20.069 clat percentiles (usec): 00:10:20.069 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 157], 20.00th=[ 212], 00:10:20.069 | 30.00th=[ 297], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:10:20.069 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 379], 00:10:20.069 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 963], 99.95th=[ 2638], 00:10:20.069 | 99.99th=[ 3949] 00:10:20.069 bw ( KiB/s): min=11056, max=16019, per=24.49%, avg=12211.86, stdev=1846.96, samples=7 00:10:20.069 iops : min= 2764, max= 4004, avg=3052.86, stdev=461.48, samples=7 00:10:20.069 lat (usec) : 250=26.26%, 500=73.53%, 750=0.09%, 1000=0.02% 00:10:20.069 lat (msec) : 2=0.04%, 4=0.05%, 10=0.01% 00:10:20.070 cpu : usr=1.06%, sys=4.72%, ctx=11429, majf=0, minf=1 00:10:20.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.070 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.070 issued rwts: total=11421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.070 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80852: Thu Jul 25 01:53:35 2024 00:10:20.070 read: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(35.1MiB/3182msec) 00:10:20.070 slat (usec): min=8, max=11481, avg=19.52, stdev=158.19 00:10:20.070 clat (usec): min=163, max=2253, avg=332.33, stdev=41.97 00:10:20.070 lat (usec): min=177, max=11779, avg=351.85, stdev=163.33 00:10:20.070 clat percentiles (usec): 00:10:20.070 | 1.00th=[ 229], 5.00th=[ 265], 10.00th=[ 293], 20.00th=[ 314], 00:10:20.070 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 00:10:20.070 | 70.00th=[ 351], 80.00th=[ 
359], 90.00th=[ 367], 95.00th=[ 379], 00:10:20.070 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 486], 99.95th=[ 515], 00:10:20.070 | 99.99th=[ 2245] 00:10:20.070 bw ( KiB/s): min=11048, max=11824, per=22.66%, avg=11298.67, stdev=288.78, samples=6 00:10:20.070 iops : min= 2762, max= 2956, avg=2824.67, stdev=72.20, samples=6 00:10:20.070 lat (usec) : 250=3.00%, 500=96.92%, 750=0.04% 00:10:20.070 lat (msec) : 2=0.01%, 4=0.01% 00:10:20.070 cpu : usr=1.26%, sys=4.34%, ctx=9002, majf=0, minf=1 00:10:20.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.070 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.070 issued rwts: total=8997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.070 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80853: Thu Jul 25 01:53:35 2024 00:10:20.070 read: IOPS=5334, BW=20.8MiB/s (21.8MB/s)(61.0MiB/2926msec) 00:10:20.070 slat (usec): min=11, max=101, avg=14.34, stdev= 3.67 00:10:20.070 clat (usec): min=129, max=2456, avg=171.34, stdev=29.08 00:10:20.070 lat (usec): min=141, max=2472, avg=185.69, stdev=29.52 00:10:20.070 clat percentiles (usec): 00:10:20.070 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:10:20.070 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:20.070 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:10:20.070 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 251], 99.95th=[ 293], 00:10:20.070 | 99.99th=[ 1532] 00:10:20.070 bw ( KiB/s): min=20560, max=22512, per=42.53%, avg=21204.80, stdev=765.93, samples=5 00:10:20.070 iops : min= 5140, max= 5628, avg=5301.20, stdev=191.48, samples=5 00:10:20.070 lat (usec) : 250=99.89%, 500=0.07%, 750=0.01%, 1000=0.01% 00:10:20.070 lat (msec) : 2=0.01%, 4=0.01% 00:10:20.070 cpu : usr=1.88%, sys=6.77%, ctx=15608, majf=0, minf=1 00:10:20.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.070 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.070 issued rwts: total=15608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.070 00:10:20.070 Run status group 0 (all jobs): 00:10:20.070 READ: bw=48.7MiB/s (51.0MB/s), 11.0MiB/s-20.8MiB/s (11.6MB/s-21.8MB/s), io=179MiB (187MB), run=2926-3667msec 00:10:20.070 00:10:20.070 Disk stats (read/write): 00:10:20.070 nvme0n1: ios=9499/0, merge=0/0, ticks=3121/0, in_queue=3121, util=95.02% 00:10:20.070 nvme0n2: ios=11146/0, merge=0/0, ticks=3241/0, in_queue=3241, util=95.08% 00:10:20.070 nvme0n3: ios=8815/0, merge=0/0, ticks=2844/0, in_queue=2844, util=96.15% 00:10:20.070 nvme0n4: ios=15306/0, merge=0/0, ticks=2660/0, in_queue=2660, util=96.73% 00:10:20.070 01:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.070 01:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:20.328 01:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.328 01:53:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:20.586 01:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.586 01:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:20.845 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.845 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 80810 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.103 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.362 nvmf hotplug test: fio failed as expected 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
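
The hotplug test above is expected to fail: the Malloc bdevs backing the namespaces are deleted while fio still has reads in flight, so fio exits nonzero (fio_status=4) and the harness treats that as a pass. waitforserial_disconnect then polls lsblk until the namespace's serial vanishes from the initiator. A minimal sketch of that polling loop, assuming the SPDKISFASTANDAWESOME serial and the max_retries=100 budget visible in the trace above (the real helper in autotest_common.sh may differ in detail):

i=0
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    i=$((i + 1))                                     # give up after max_retries polls
    [ "$i" -ge 100 ] && { echo "device never disconnected" >&2; exit 1; }
    sleep 1
done
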
00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:21.362 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:21.362 rmmod nvme_tcp 00:10:21.621 rmmod nvme_fabrics 00:10:21.621 rmmod nvme_keyring 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 80435 ']' 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 80435 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 80435 ']' 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 80435 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80435 00:10:21.621 killing process with pid 80435 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80435' 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 80435 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 80435 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:21.621 00:10:21.621 real 0m18.632s 00:10:21.621 user 1m10.504s 00:10:21.621 sys 0m9.955s 00:10:21.621 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.621 01:53:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.621 ************************************ 00:10:21.621 END TEST nvmf_fio_target 00:10:21.621 ************************************ 00:10:21.881 01:53:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.881 01:53:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.881 01:53:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.881 01:53:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.881 ************************************ 00:10:21.881 START TEST nvmf_bdevio 00:10:21.881 ************************************ 00:10:21.881 01:53:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.881 * Looking for test storage... 00:10:21.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.881 01:53:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
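
nvmftestinit lands in nvmf_veth_init here: veth pairs join the initiator side (10.0.0.1) to a target network namespace (10.0.0.2, plus a second interface at 10.0.0.3) over a Linux bridge, and an iptables rule opens the NVMe/TCP port. A condensed sketch of the topology the trace below builds, with the interface-up and cleanup steps omitted:

ip netns add nvmf_tgt_ns_spdk                           # target runs in its own netns
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # nvmf_tgt_if2 is added the same way
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                 # bridge the two ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
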
00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:21.881 Cannot find device "nvmf_tgt_br" 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.881 Cannot find device "nvmf_tgt_br2" 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:21.881 Cannot find device "nvmf_tgt_br" 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:21.881 Cannot find device "nvmf_tgt_br2" 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:21.881 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:21.882 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:22.140 01:53:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:22.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:10:22.140 00:10:22.140 --- 10.0.0.2 ping statistics --- 00:10:22.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.140 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:22.140 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:22.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:22.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:22.141 00:10:22.141 --- 10.0.0.3 ping statistics --- 00:10:22.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.141 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:22.141 00:10:22.141 --- 10.0.0.1 ping statistics --- 00:10:22.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.141 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=81114 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 81114 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 81114 ']' 00:10:22.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.141 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.400 [2024-07-25 01:53:37.464802] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:10:22.400 [2024-07-25 01:53:37.464902] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.400 [2024-07-25 01:53:37.590985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
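
The -m 0x78 core mask passed to nvmf_tgt above is a plain CPU bitmap: 0x78 has bits 3 through 6 set, which is why the startup log below reports four reactors on cores 3, 4, 5 and 6. A quick check of the arithmetic:

# bits 3..6 -> four reactor cores
printf '0x%x\n' $(( (1 << 3) | (1 << 4) | (1 << 5) | (1 << 6) ))    # prints 0x78
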
There is no support for it in SPDK. Enabled only for validation. 00:10:22.400 [2024-07-25 01:53:37.607814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.400 [2024-07-25 01:53:37.643105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.400 [2024-07-25 01:53:37.643642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.400 [2024-07-25 01:53:37.644186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.400 [2024-07-25 01:53:37.644633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.400 [2024-07-25 01:53:37.644960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.400 [2024-07-25 01:53:37.645367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:22.400 [2024-07-25 01:53:37.645436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:22.400 [2024-07-25 01:53:37.645742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:22.400 [2024-07-25 01:53:37.645777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.400 [2024-07-25 01:53:37.676832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.659 [2024-07-25 01:53:37.766608] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.659 Malloc0 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
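
With the transport, Malloc bdev and subsystem created above, the two remaining rpc_cmd calls (traced just below) attach the bdev as a namespace and open a TCP listener. The whole target-side provisioning sequence, condensed from the traces around this point, where rpc.py stands for the /home/vagrant/spdk_repo/spdk/scripts/rpc.py used throughout this run:

rpc.py nvmf_create_transport -t tcp -o -u 8192          # flags exactly as traced above
rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
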
00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.659 [2024-07-25 01:53:37.828117] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:22.659 { 00:10:22.659 "params": { 00:10:22.659 "name": "Nvme$subsystem", 00:10:22.659 "trtype": "$TEST_TRANSPORT", 00:10:22.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:22.659 "adrfam": "ipv4", 00:10:22.659 "trsvcid": "$NVMF_PORT", 00:10:22.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:22.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:22.659 "hdgst": ${hdgst:-false}, 00:10:22.659 "ddgst": ${ddgst:-false} 00:10:22.659 }, 00:10:22.659 "method": "bdev_nvme_attach_controller" 00:10:22.659 } 00:10:22.659 EOF 00:10:22.659 )") 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:22.659 01:53:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:22.659 "params": { 00:10:22.659 "name": "Nvme1", 00:10:22.659 "trtype": "tcp", 00:10:22.659 "traddr": "10.0.0.2", 00:10:22.659 "adrfam": "ipv4", 00:10:22.659 "trsvcid": "4420", 00:10:22.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:22.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:22.659 "hdgst": false, 00:10:22.659 "ddgst": false 00:10:22.659 }, 00:10:22.659 "method": "bdev_nvme_attach_controller" 00:10:22.659 }' 00:10:22.659 [2024-07-25 01:53:37.885285] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:10:22.659 [2024-07-25 01:53:37.885369] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81143 ] 00:10:22.918 [2024-07-25 01:53:38.012597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:22.918 [2024-07-25 01:53:38.031037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:22.918 [2024-07-25 01:53:38.067712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.918 [2024-07-25 01:53:38.067864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.918 [2024-07-25 01:53:38.067868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.918 [2024-07-25 01:53:38.105888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.918 I/O targets: 00:10:22.918 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:22.918 00:10:22.918 00:10:22.918 CUnit - A unit testing framework for C - Version 2.1-3 00:10:22.918 http://cunit.sourceforge.net/ 00:10:22.918 00:10:22.918 00:10:22.918 Suite: bdevio tests on: Nvme1n1 00:10:22.918 Test: blockdev write read block ...passed 00:10:22.918 Test: blockdev write zeroes read block ...passed 00:10:22.918 Test: blockdev write zeroes read no split ...passed 00:10:23.223 Test: blockdev write zeroes read split ...passed 00:10:23.223 Test: blockdev write zeroes read split partial ...passed 00:10:23.223 Test: blockdev reset ...[2024-07-25 01:53:38.231246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:23.223 [2024-07-25 01:53:38.231355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa44130 (9): Bad file descriptor 00:10:23.223 passed 00:10:23.223 Test: blockdev write read 8 blocks ...[2024-07-25 01:53:38.248380] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:10:23.223 passed 00:10:23.223 Test: blockdev write read size > 128k ...passed 00:10:23.223 Test: blockdev write read invalid size ...passed 00:10:23.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:23.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:23.223 Test: blockdev write read max offset ...passed 00:10:23.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:23.223 Test: blockdev writev readv 8 blocks ...passed 00:10:23.223 Test: blockdev writev readv 30 x 1block ...passed 00:10:23.223 Test: blockdev writev readv block ...passed 00:10:23.223 Test: blockdev writev readv size > 128k ...passed 00:10:23.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:23.223 Test: blockdev comparev and writev ...[2024-07-25 01:53:38.256581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.223 [2024-07-25 01:53:38.256628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:23.223 [2024-07-25 01:53:38.256654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.223 [2024-07-25 01:53:38.256668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:23.223 [2024-07-25 01:53:38.257001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.223 [2024-07-25 01:53:38.257028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:23.223 [2024-07-25 01:53:38.257050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.223 [2024-07-25 01:53:38.257062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:23.223 [2024-07-25 01:53:38.257444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.223 [2024-07-25 01:53:38.257476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:23.223 [2024-07-25 01:53:38.257498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.223 [2024-07-25 01:53:38.257510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:23.223 [2024-07-25 01:53:38.257806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.223 [2024-07-25 01:53:38.257850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:23.223 [2024-07-25 01:53:38.257884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.223 [2024-07-25 01:53:38.257896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:23.223 passed 00:10:23.223 Test: blockdev nvme passthru rw ...passed 00:10:23.224 Test: blockdev nvme passthru vendor specific ...[2024-07-25 01:53:38.258772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.224 [2024-07-25 01:53:38.258800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:23.224 [2024-07-25 01:53:38.258944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.224 [2024-07-25 01:53:38.258972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:23.224 [2024-07-25 01:53:38.259090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.224 [2024-07-25 01:53:38.259114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:23.224 passed 00:10:23.224 Test: blockdev nvme admin passthru ...[2024-07-25 01:53:38.259215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.224 [2024-07-25 01:53:38.259246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:23.224 passed 00:10:23.224 Test: blockdev copy ...passed 00:10:23.224 00:10:23.224 Run Summary: Type Total Ran Passed Failed Inactive 00:10:23.224 suites 1 1 n/a 0 0 00:10:23.224 tests 23 23 23 0 0 00:10:23.224 asserts 152 152 152 0 n/a 00:10:23.224 00:10:23.224 Elapsed time = 0.147 seconds 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.224 rmmod nvme_tcp 00:10:23.224 rmmod nvme_fabrics 00:10:23.224 rmmod nvme_keyring 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@489 -- # '[' -n 81114 ']' 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 81114 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 81114 ']' 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 81114 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.224 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81114 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:23.483 killing process with pid 81114 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81114' 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 81114 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 81114 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:23.483 00:10:23.483 real 0m1.765s 00:10:23.483 user 0m5.017s 00:10:23.483 sys 0m0.587s 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.483 ************************************ 00:10:23.483 END TEST nvmf_bdevio 00:10:23.483 ************************************ 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:23.483 ************************************ 00:10:23.483 END TEST nvmf_target_core 00:10:23.483 ************************************ 00:10:23.483 00:10:23.483 real 2m22.235s 00:10:23.483 user 6m17.994s 00:10:23.483 sys 0m51.291s 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.483 01:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 01:53:38 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 
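The teardown traced above follows the killprocess pattern from autotest_common.sh: check that the pid is non-empty, probe it with kill -0, read its process name with ps to make sure it is not a sudo wrapper, then kill and wait. A minimal reconstruction inferred from the xtrace (the real helper may differ in detail, for instance in how it handles sudo-owned processes):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # '[' -z 81114 ']' in the trace
    kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_3 in the trace
    fi
    [ "$process_name" = sudo ] && return 1  # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}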
00:10:23.743 01:53:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.743 01:53:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.743 01:53:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 ************************************ 00:10:23.743 START TEST nvmf_target_extra 00:10:23.743 ************************************ 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:23.743 * Looking for test storage... 00:10:23.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:23.743 01:53:38 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.743 01:53:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 ************************************ 00:10:23.743 START TEST nvmf_auth_target 00:10:23.743 ************************************ 00:10:23.744 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:23.744 * Looking for test storage... 00:10:23.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:23.744 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:23.744 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.744 01:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.744 01:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:23.744 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:24.003 Cannot find device "nvmf_tgt_br" 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.003 Cannot find device "nvmf_tgt_br2" 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:24.003 Cannot find device "nvmf_tgt_br" 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:24.003 Cannot find device "nvmf_tgt_br2" 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:24.003 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:24.261 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:24.261 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:24.261 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:24.261 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:24.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:24.261 00:10:24.261 --- 10.0.0.2 ping statistics --- 00:10:24.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.261 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:24.261 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:24.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:24.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:24.261 00:10:24.261 --- 10.0.0.3 ping statistics --- 00:10:24.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.262 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:24.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:24.262 00:10:24.262 --- 10.0.0.1 ping statistics --- 00:10:24.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.262 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=81363 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 81363 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81363 ']' 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
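nvmf_veth_init, traced above, builds the virtual test network: three veth pairs whose target-side ends (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, an initiator end (10.0.0.1) in the root namespace, and all bridge-side peers enslaved to nvmf_br, with iptables opened for port 4420 and the pings as reachability checks. Stripped of xtrace noise, the topology amounts to the following (commands copied from the trace; the "Cannot find device" cleanup messages at the start are expected on a fresh host, and error handling is omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target    <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # 2nd target <-> bridge
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
    ip link set "$l" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # reachability check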
00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.262 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=81395 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=76aada060c851f002a66eee5e4dcb72c8961d66106df4e14 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nqZ 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 76aada060c851f002a66eee5e4dcb72c8961d66106df4e14 0 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 76aada060c851f002a66eee5e4dcb72c8961d66106df4e14 0 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=76aada060c851f002a66eee5e4dcb72c8961d66106df4e14 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:25.195 01:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nqZ 00:10:25.195 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nqZ 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.nqZ 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ea248ffda374b47c33eb1634f706c17c22a14e6c8cbedec8f6fe0bb25bb15f92 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4Ek 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ea248ffda374b47c33eb1634f706c17c22a14e6c8cbedec8f6fe0bb25bb15f92 3 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ea248ffda374b47c33eb1634f706c17c22a14e6c8cbedec8f6fe0bb25bb15f92 3 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ea248ffda374b47c33eb1634f706c17c22a14e6c8cbedec8f6fe0bb25bb15f92 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4Ek 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4Ek 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.4Ek 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:25.454 01:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=14070b13432c3ff0eac3075b5a861c0a 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gfR 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 14070b13432c3ff0eac3075b5a861c0a 1 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 14070b13432c3ff0eac3075b5a861c0a 1 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=14070b13432c3ff0eac3075b5a861c0a 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gfR 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gfR 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.gfR 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5ca5ba788e33c64dcba486fc76e7fa2d339f3a72ee145ca8 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zZg 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5ca5ba788e33c64dcba486fc76e7fa2d339f3a72ee145ca8 2 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5ca5ba788e33c64dcba486fc76e7fa2d339f3a72ee145ca8 2 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5ca5ba788e33c64dcba486fc76e7fa2d339f3a72ee145ca8 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zZg 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zZg 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.zZg 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f9fd68fc5b4df3d76ac3f5cc030c1b8c9245485edd7d040b 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5tm 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f9fd68fc5b4df3d76ac3f5cc030c1b8c9245485edd7d040b 2 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f9fd68fc5b4df3d76ac3f5cc030c1b8c9245485edd7d040b 2 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f9fd68fc5b4df3d76ac3f5cc030c1b8c9245485edd7d040b 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:25.454 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5tm 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5tm 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.5tm 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:25.713 01:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=df81ed277c2fae351f5110c85cb49ab7 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FWE 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key df81ed277c2fae351f5110c85cb49ab7 1 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 df81ed277c2fae351f5110c85cb49ab7 1 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=df81ed277c2fae351f5110c85cb49ab7 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:25.713 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FWE 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FWE 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.FWE 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fe3e778b231af8261efbd635ce9f43fa4167b8d0c032084c71cc01f9164e3a54 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zG1 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
fe3e778b231af8261efbd635ce9f43fa4167b8d0c032084c71cc01f9164e3a54 3 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fe3e778b231af8261efbd635ce9f43fa4167b8d0c032084c71cc01f9164e3a54 3 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fe3e778b231af8261efbd635ce9f43fa4167b8d0c032084c71cc01f9164e3a54 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zG1 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zG1 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.zG1 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 81363 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81363 ']' 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.714 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 81395 /var/tmp/host.sock 00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81395 ']' 00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
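The gen_dhchap_key calls traced above all follow one recipe: read len/2 random bytes with xxd -p as a hex string of len characters, then wrap that string in the DHHC-1 secret representation via the inline python step. Decoding the DHHC-1 payloads that appear later in this log (for example, the sha384 key 5ca5ba788e33c64dcba486fc76e7fa2d339f3a72ee145ca8 above resurfaces inside DHHC-1:02:NWNhNWJh...) shows the hex string embedded as ASCII plus a four-byte tail, consistent with the CRC-32 suffix of the NVMe DH-HMAC-CHAP secret format. A minimal sketch of that wrapping, assuming the CRC-32 detail; the helper name below is illustrative, not SPDK's source:

format_dhchap_key_sketch() {
  local key=$1 digest=$2   # e.g. key=$(xxd -p -c0 -l 24 /dev/urandom), digest=2 for sha384
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the ASCII hex string, exactly as generated
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed CRC-32 tail per the DHHC-1 format
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}

Fed the sha384 key above, this reproduces the DHHC-1:02:NWNhNWJh... controller secret that nvme connect presents further down in the log.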
00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.972 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nqZ 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nqZ 00:10:26.231 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nqZ 00:10:26.489 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.4Ek ]] 00:10:26.489 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Ek 00:10:26.489 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.489 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.489 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.489 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Ek 00:10:26.489 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Ek 00:10:26.748 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:26.748 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gfR 00:10:26.748 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.748 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.748 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.748 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.gfR 00:10:26.748 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.gfR 00:10:27.007 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.zZg ]] 00:10:27.007 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zZg 00:10:27.007 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.007 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.007 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.007 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zZg 00:10:27.007 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zZg 00:10:27.265 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:27.265 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5tm 00:10:27.265 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.265 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.266 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.266 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5tm 00:10:27.266 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5tm 00:10:27.524 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.FWE ]] 00:10:27.524 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FWE 00:10:27.524 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.524 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.524 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.524 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FWE 00:10:27.524 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FWE 00:10:27.783 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:27.783 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zG1 00:10:27.783 01:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.783 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.783 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.783 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zG1 00:10:27.783 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zG1 00:10:28.042 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:28.042 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:28.042 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:28.042 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:28.042 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:28.042 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.301 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0
00:10:28.560
00:10:28.560 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:10:28.560 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:10:28.560 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:10:28.819 {
00:10:28.819 "cntlid": 1,
00:10:28.819 "qid": 0,
00:10:28.819 "state": "enabled",
00:10:28.819 "thread": "nvmf_tgt_poll_group_000",
00:10:28.819 "listen_address": {
00:10:28.819 "trtype": "TCP",
00:10:28.819 "adrfam": "IPv4",
00:10:28.819 "traddr": "10.0.0.2",
00:10:28.819 "trsvcid": "4420"
00:10:28.819 },
00:10:28.819 "peer_address": {
00:10:28.819 "trtype": "TCP",
00:10:28.819 "adrfam": "IPv4",
00:10:28.819 "traddr": "10.0.0.1",
00:10:28.819 "trsvcid": "51554"
00:10:28.819 },
00:10:28.819 "auth": {
00:10:28.819 "state": "completed",
00:10:28.819 "digest": "sha256",
00:10:28.819 "dhgroup": "null"
00:10:28.819 }
00:10:28.819 }
00:10:28.819 ]'
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:10:28.819 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:10:29.078 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:10:29.078 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:10:29.078 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:10:29.078 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:10:29.078 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:10:29.338 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=:
00:10:33.530 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:10:33.530 NQN:nqn.2024-03.io.spdk:cnode0
disconnected 1 controller(s) 00:10:33.530 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:33.530 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.530 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.530 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.530 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.530 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:33.530 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.789 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.048 00:10:34.048 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:34.048 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.048 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
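Each pass of the loop running here exercises one key slot end to end: pin the host to a single digest/dhgroup combination, authorize the host NQN on the subsystem with that key pair, then attach a controller over TCP so DH-HMAC-CHAP actually runs before anything is inspected. Condensed into a sketch (NQNs, addresses, and sockets as in the log; hostrpc stands for rpc.py aimed at /var/tmp/host.sock, and hostnqn abbreviates the long uuid NQN above):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
for keyid in 0 1 2 3; do
  # host side: only this digest/dhgroup pair may be negotiated
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  # target side: bind the key (and controller key, when one exists) to the host NQN
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # host side: attaching over TCP forces the DH-HMAC-CHAP handshake
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
done

key3 has no controller key in this run (ckeys[3] is empty), which is why the real script builds the --dhchap-ctrlr-key argument conditionally via ${ckeys[$3]:+...}.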
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:10:34.307 {
00:10:34.307 "cntlid": 3,
00:10:34.307 "qid": 0,
00:10:34.307 "state": "enabled",
00:10:34.307 "thread": "nvmf_tgt_poll_group_000",
00:10:34.307 "listen_address": {
00:10:34.307 "trtype": "TCP",
00:10:34.307 "adrfam": "IPv4",
00:10:34.307 "traddr": "10.0.0.2",
00:10:34.307 "trsvcid": "4420"
00:10:34.307 },
00:10:34.307 "peer_address": {
00:10:34.307 "trtype": "TCP",
00:10:34.307 "adrfam": "IPv4",
00:10:34.307 "traddr": "10.0.0.1",
00:10:34.307 "trsvcid": "37414"
00:10:34.307 },
00:10:34.307 "auth": {
00:10:34.307 "state": "completed",
00:10:34.307 "digest": "sha256",
00:10:34.307 "dhgroup": "null"
00:10:34.307 }
00:10:34.307 }
00:10:34.307 ]'
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:10:34.307 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:10:34.566 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==:
00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:10:35.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
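The qpairs dump above is the actual pass/fail evidence for each iteration: the script asks the target for the subsystem's queue pairs and asserts that the auth block reports the digest and dhgroup it configured, and that the handshake state is completed. The same check as a standalone sketch (jq filters copied from the log; digest and dhgroup hold whatever the current iteration set):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]    # e.g. sha256
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. null or ffdhe2048
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]     # handshake finished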
00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.503 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.762 00:10:35.762 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.762 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.762 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.021 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.021 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.021 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.021 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x
00:10:36.021 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.021 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:10:36.021 {
00:10:36.021 "cntlid": 5,
00:10:36.021 "qid": 0,
00:10:36.021 "state": "enabled",
00:10:36.021 "thread": "nvmf_tgt_poll_group_000",
00:10:36.021 "listen_address": {
00:10:36.021 "trtype": "TCP",
00:10:36.021 "adrfam": "IPv4",
00:10:36.021 "traddr": "10.0.0.2",
00:10:36.021 "trsvcid": "4420"
00:10:36.021 },
00:10:36.021 "peer_address": {
00:10:36.021 "trtype": "TCP",
00:10:36.021 "adrfam": "IPv4",
00:10:36.021 "traddr": "10.0.0.1",
00:10:36.021 "trsvcid": "37444"
00:10:36.021 },
00:10:36.021 "auth": {
00:10:36.021 "state": "completed",
00:10:36.021 "digest": "sha256",
00:10:36.021 "dhgroup": "null"
00:10:36.021 }
00:10:36.021 }
00:10:36.021 ]'
00:10:36.021 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:10:36.279 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:10:36.279 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:10:36.279 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:10:36.279 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:10:36.279 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:10:36.279 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:10:36.279 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:10:36.536 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5:
00:10:37.102 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:10:37.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:10:37.102 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:10:37.103 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.103 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:10:37.103 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:37.103 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:10:37.103 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:10:37.103 01:53:52
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.361 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.620 00:10:37.620 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:37.620 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:37.620 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.880 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.880 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.880 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.880 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.880 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.880 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.880 { 00:10:37.880 "cntlid": 7, 00:10:37.880 "qid": 0, 00:10:37.880 "state": "enabled", 00:10:37.880 "thread": "nvmf_tgt_poll_group_000", 00:10:37.880 "listen_address": { 00:10:37.880 "trtype": "TCP", 00:10:37.880 "adrfam": "IPv4", 00:10:37.880 "traddr": 
"10.0.0.2", 00:10:37.880 "trsvcid": "4420" 00:10:37.880 }, 00:10:37.880 "peer_address": { 00:10:37.880 "trtype": "TCP", 00:10:37.880 "adrfam": "IPv4", 00:10:37.880 "traddr": "10.0.0.1", 00:10:37.880 "trsvcid": "37474" 00:10:37.880 }, 00:10:37.880 "auth": { 00:10:37.880 "state": "completed", 00:10:37.880 "digest": "sha256", 00:10:37.880 "dhgroup": "null" 00:10:37.880 } 00:10:37.880 } 00:10:37.880 ]' 00:10:37.880 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.880 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.880 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.880 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:37.880 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.880 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.880 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.880 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.139 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.074 01:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.074 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.641 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.641 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.641 { 00:10:39.641 "cntlid": 9, 00:10:39.641 "qid": 0, 00:10:39.641 "state": "enabled", 00:10:39.641 "thread": "nvmf_tgt_poll_group_000", 00:10:39.641 "listen_address": { 00:10:39.641 "trtype": "TCP", 00:10:39.641 "adrfam": "IPv4", 00:10:39.641 "traddr": "10.0.0.2", 00:10:39.641 "trsvcid": "4420" 00:10:39.641 }, 00:10:39.641 "peer_address": { 00:10:39.642 "trtype": "TCP", 00:10:39.642 "adrfam": "IPv4", 00:10:39.642 "traddr": "10.0.0.1", 00:10:39.642 "trsvcid": "37496" 00:10:39.642 }, 00:10:39.642 "auth": { 00:10:39.642 "state": "completed", 00:10:39.642 "digest": "sha256", 00:10:39.642 "dhgroup": "ffdhe2048" 00:10:39.642 } 00:10:39.642 } 
00:10:39.642 ]' 00:10:39.642 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:39.900 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.900 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.901 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:39.901 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.901 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.901 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.901 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.159 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:10:40.727 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.727 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:40.727 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.727 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.727 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.727 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.727 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:40.727 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:10:40.986 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:10:41.244
00:10:41.244 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:10:41.244 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:10:41.244 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:10:41.503 {
00:10:41.503 "cntlid": 11,
00:10:41.503 "qid": 0,
00:10:41.503 "state": "enabled",
00:10:41.503 "thread": "nvmf_tgt_poll_group_000",
00:10:41.503 "listen_address": {
00:10:41.503 "trtype": "TCP",
00:10:41.503 "adrfam": "IPv4",
00:10:41.503 "traddr": "10.0.0.2",
00:10:41.503 "trsvcid": "4420"
00:10:41.503 },
00:10:41.503 "peer_address": {
00:10:41.503 "trtype": "TCP",
00:10:41.503 "adrfam": "IPv4",
00:10:41.503 "traddr": "10.0.0.1",
00:10:41.503 "trsvcid": "37522"
00:10:41.503 },
00:10:41.503 "auth": {
00:10:41.503 "state": "completed",
00:10:41.503 "digest": "sha256",
00:10:41.503 "dhgroup": "ffdhe2048"
00:10:41.503 }
00:10:41.503 }
00:10:41.503 ]'
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:10:41.503 01:53:56
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.503 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.070 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:10:42.638 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.638 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:42.638 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.638 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.638 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.638 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.638 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.638 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
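Note that every iteration checks two initiators in turn: the SPDK host path (bdev_nvme_attach_controller against /var/tmp/host.sock) and the kernel path, where nvme-cli is handed the literal DHHC-1 strings, as in the connect/disconnect pairs above. A sketch of that second leg, with the secrets held in illustrative variables rather than spelled out:

# kernel initiator: --dhchap-secret is the host key, --dhchap-ctrl-secret the controller key
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$hostnqn" --hostid "${hostnqn##*:}" \
  --dhchap-secret "$host_dhhc1_secret" --dhchap-ctrl-secret "$ctrl_dhhc1_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The disconnect reporting 1 controller(s) is what proves the authenticated connection was actually established rather than silently rejected.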
00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:10:42.897 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:10:43.156
00:10:43.156 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:10:43.156 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:10:43.156 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:10:43.415 {
00:10:43.415 "cntlid": 13,
00:10:43.415 "qid": 0,
00:10:43.415 "state": "enabled",
00:10:43.415 "thread": "nvmf_tgt_poll_group_000",
00:10:43.415 "listen_address": {
00:10:43.415 "trtype": "TCP",
00:10:43.415 "adrfam": "IPv4",
00:10:43.415 "traddr": "10.0.0.2",
00:10:43.415 "trsvcid": "4420"
00:10:43.415 },
00:10:43.415 "peer_address": {
00:10:43.415 "trtype": "TCP",
00:10:43.415 "adrfam": "IPv4",
00:10:43.415 "traddr": "10.0.0.1",
00:10:43.415 "trsvcid": "52936"
00:10:43.415 },
00:10:43.415 "auth": {
00:10:43.415 "state": "completed",
00:10:43.415 "digest": "sha256",
00:10:43.415 "dhgroup": "ffdhe2048"
00:10:43.415 }
00:10:43.415 }
00:10:43.415 ]'
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.415 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.674 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.610 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:44.611 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:45.178 00:10:45.178 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:45.178 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.178 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.178 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.178 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.178 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.178 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.437 { 00:10:45.437 "cntlid": 15, 00:10:45.437 "qid": 0, 00:10:45.437 "state": "enabled", 00:10:45.437 "thread": "nvmf_tgt_poll_group_000", 00:10:45.437 "listen_address": { 00:10:45.437 "trtype": "TCP", 00:10:45.437 "adrfam": "IPv4", 00:10:45.437 "traddr": "10.0.0.2", 00:10:45.437 "trsvcid": "4420" 00:10:45.437 }, 00:10:45.437 "peer_address": { 00:10:45.437 "trtype": "TCP", 00:10:45.437 "adrfam": "IPv4", 00:10:45.437 "traddr": "10.0.0.1", 00:10:45.437 "trsvcid": "52980" 00:10:45.437 }, 00:10:45.437 "auth": { 00:10:45.437 "state": "completed", 00:10:45.437 "digest": "sha256", 00:10:45.437 "dhgroup": "ffdhe2048" 00:10:45.437 } 00:10:45.437 } 00:10:45.437 ]' 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.437 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.696 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:46.630 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.893 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.152 00:10:47.152 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.152 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.152 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.410 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.410 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.411 { 00:10:47.411 "cntlid": 17, 00:10:47.411 "qid": 0, 00:10:47.411 "state": "enabled", 00:10:47.411 "thread": "nvmf_tgt_poll_group_000", 00:10:47.411 "listen_address": { 00:10:47.411 "trtype": "TCP", 00:10:47.411 "adrfam": "IPv4", 00:10:47.411 "traddr": "10.0.0.2", 00:10:47.411 "trsvcid": "4420" 00:10:47.411 }, 00:10:47.411 "peer_address": { 00:10:47.411 "trtype": "TCP", 00:10:47.411 "adrfam": "IPv4", 00:10:47.411 "traddr": "10.0.0.1", 00:10:47.411 "trsvcid": "53026" 00:10:47.411 }, 00:10:47.411 "auth": { 00:10:47.411 "state": "completed", 00:10:47.411 "digest": "sha256", 00:10:47.411 "dhgroup": "ffdhe3072" 00:10:47.411 } 00:10:47.411 } 00:10:47.411 ]' 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:47.411 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:47.669 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.669 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.669 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.928 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:10:48.493 01:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.493 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:48.493 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.493 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.493 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.493 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.493 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:48.493 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.751 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.752 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.752 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.010 00:10:49.010 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.010 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
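The cycle recorded above is the suite's basic DH-HMAC-CHAP round trip: target/auth.sh pins the host to one digest/dhgroup pair, registers the host NQN on the subsystem with a key (and, for most key ids, a controller key), attaches a controller through the host RPC socket, reads the negotiated auth parameters back off the target's qpair, detaches, and then repeats the handshake once more through nvme-cli before removing the host. A condensed sketch of one such cycle, assembled only from commands visible in the trace (the NQNs, UUID, and addresses are the ones this run uses; rpc_cmd is the suite's wrapper around the target-side rpc.py socket; the DHHC-1 secrets are abbreviated here, the full strings appear in the trace):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d

  # Host side: restrict DH-CHAP negotiation to the combination under test.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side: allow this host on the subsystem with a key pair.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach over TCP with the matching keys; the attach only succeeds if the
  # bidirectional handshake completes.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify on the target, detach, then redo the handshake through nvme-cli.
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 6f42f786-7175-4746-b686-8365485f4d3d \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
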
00:10:49.010 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.268 { 00:10:49.268 "cntlid": 19, 00:10:49.268 "qid": 0, 00:10:49.268 "state": "enabled", 00:10:49.268 "thread": "nvmf_tgt_poll_group_000", 00:10:49.268 "listen_address": { 00:10:49.268 "trtype": "TCP", 00:10:49.268 "adrfam": "IPv4", 00:10:49.268 "traddr": "10.0.0.2", 00:10:49.268 "trsvcid": "4420" 00:10:49.268 }, 00:10:49.268 "peer_address": { 00:10:49.268 "trtype": "TCP", 00:10:49.268 "adrfam": "IPv4", 00:10:49.268 "traddr": "10.0.0.1", 00:10:49.268 "trsvcid": "53048" 00:10:49.268 }, 00:10:49.268 "auth": { 00:10:49.268 "state": "completed", 00:10:49.268 "digest": "sha256", 00:10:49.268 "dhgroup": "ffdhe3072" 00:10:49.268 } 00:10:49.268 } 00:10:49.268 ]' 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.268 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.526 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:49.526 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.526 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.526 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.526 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.784 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:10:50.350 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.350 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:50.350 01:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.350 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.350 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.350 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.350 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:50.350 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.609 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.867 00:10:50.867 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.867 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.867 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.125 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.125 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:51.125 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.125 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.125 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.125 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.125 { 00:10:51.125 "cntlid": 21, 00:10:51.125 "qid": 0, 00:10:51.125 "state": "enabled", 00:10:51.125 "thread": "nvmf_tgt_poll_group_000", 00:10:51.125 "listen_address": { 00:10:51.125 "trtype": "TCP", 00:10:51.125 "adrfam": "IPv4", 00:10:51.125 "traddr": "10.0.0.2", 00:10:51.125 "trsvcid": "4420" 00:10:51.125 }, 00:10:51.125 "peer_address": { 00:10:51.125 "trtype": "TCP", 00:10:51.125 "adrfam": "IPv4", 00:10:51.125 "traddr": "10.0.0.1", 00:10:51.125 "trsvcid": "53062" 00:10:51.125 }, 00:10:51.125 "auth": { 00:10:51.125 "state": "completed", 00:10:51.125 "digest": "sha256", 00:10:51.125 "dhgroup": "ffdhe3072" 00:10:51.125 } 00:10:51.125 } 00:10:51.125 ]' 00:10:51.125 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.382 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.382 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.382 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:51.382 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.382 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.382 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.382 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.640 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:10:52.206 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.206 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:52.206 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.206 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.206 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.206 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:52.206 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:52.206 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:52.464 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:53.029 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.029 { 00:10:53.029 "cntlid": 
23, 00:10:53.029 "qid": 0, 00:10:53.029 "state": "enabled", 00:10:53.029 "thread": "nvmf_tgt_poll_group_000", 00:10:53.029 "listen_address": { 00:10:53.029 "trtype": "TCP", 00:10:53.029 "adrfam": "IPv4", 00:10:53.029 "traddr": "10.0.0.2", 00:10:53.029 "trsvcid": "4420" 00:10:53.029 }, 00:10:53.029 "peer_address": { 00:10:53.029 "trtype": "TCP", 00:10:53.029 "adrfam": "IPv4", 00:10:53.029 "traddr": "10.0.0.1", 00:10:53.029 "trsvcid": "37972" 00:10:53.029 }, 00:10:53.029 "auth": { 00:10:53.029 "state": "completed", 00:10:53.029 "digest": "sha256", 00:10:53.029 "dhgroup": "ffdhe3072" 00:10:53.029 } 00:10:53.029 } 00:10:53.029 ]' 00:10:53.029 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.287 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.287 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.287 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:53.287 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.287 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.287 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.287 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.545 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:54.112 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:54.370 01:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.370 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.629 00:10:54.629 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.629 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.629 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.888 { 00:10:54.888 "cntlid": 25, 00:10:54.888 "qid": 0, 00:10:54.888 "state": "enabled", 00:10:54.888 "thread": "nvmf_tgt_poll_group_000", 00:10:54.888 "listen_address": { 00:10:54.888 "trtype": "TCP", 00:10:54.888 "adrfam": "IPv4", 00:10:54.888 "traddr": "10.0.0.2", 00:10:54.888 "trsvcid": "4420" 00:10:54.888 }, 00:10:54.888 "peer_address": { 00:10:54.888 "trtype": "TCP", 00:10:54.888 
"adrfam": "IPv4", 00:10:54.888 "traddr": "10.0.0.1", 00:10:54.888 "trsvcid": "37986" 00:10:54.888 }, 00:10:54.888 "auth": { 00:10:54.888 "state": "completed", 00:10:54.888 "digest": "sha256", 00:10:54.888 "dhgroup": "ffdhe4096" 00:10:54.888 } 00:10:54.888 } 00:10:54.888 ]' 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.888 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.150 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:55.150 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.150 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.150 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.150 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.409 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:10:55.978 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.978 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:55.978 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.978 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.978 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.978 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.978 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:55.978 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.237 01:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.237 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.495 00:10:56.495 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.495 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.495 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.754 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.754 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.754 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.754 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.754 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.754 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.754 { 00:10:56.754 "cntlid": 27, 00:10:56.754 "qid": 0, 00:10:56.754 "state": "enabled", 00:10:56.754 "thread": "nvmf_tgt_poll_group_000", 00:10:56.754 "listen_address": { 00:10:56.754 "trtype": "TCP", 00:10:56.754 "adrfam": "IPv4", 00:10:56.754 "traddr": "10.0.0.2", 00:10:56.754 "trsvcid": "4420" 00:10:56.754 }, 00:10:56.754 "peer_address": { 00:10:56.754 "trtype": "TCP", 00:10:56.754 "adrfam": "IPv4", 00:10:56.754 "traddr": "10.0.0.1", 00:10:56.754 "trsvcid": "38014" 00:10:56.754 }, 00:10:56.754 "auth": { 00:10:56.754 "state": "completed", 00:10:56.754 "digest": "sha256", 00:10:56.754 "dhgroup": "ffdhe4096" 00:10:56.754 } 00:10:56.754 } 00:10:56.754 ]' 00:10:56.754 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:10:56.754 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.754 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.754 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:56.754 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.013 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.013 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.013 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.272 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:10:57.839 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.839 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:57.839 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.839 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.839 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.839 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.839 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:57.839 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.098 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.356 00:10:58.356 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.356 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.356 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.613 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.613 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.613 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.613 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.613 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.613 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.613 { 00:10:58.613 "cntlid": 29, 00:10:58.613 "qid": 0, 00:10:58.613 "state": "enabled", 00:10:58.613 "thread": "nvmf_tgt_poll_group_000", 00:10:58.613 "listen_address": { 00:10:58.613 "trtype": "TCP", 00:10:58.613 "adrfam": "IPv4", 00:10:58.613 "traddr": "10.0.0.2", 00:10:58.613 "trsvcid": "4420" 00:10:58.613 }, 00:10:58.613 "peer_address": { 00:10:58.613 "trtype": "TCP", 00:10:58.613 "adrfam": "IPv4", 00:10:58.613 "traddr": "10.0.0.1", 00:10:58.613 "trsvcid": "38042" 00:10:58.613 }, 00:10:58.613 "auth": { 00:10:58.613 "state": "completed", 00:10:58.613 "digest": "sha256", 00:10:58.613 "dhgroup": "ffdhe4096" 00:10:58.614 } 00:10:58.614 } 00:10:58.614 ]' 00:10:58.614 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.614 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.614 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.872 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:58.872 01:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.872 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.872 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.872 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.130 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:10:59.697 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.697 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:10:59.697 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.697 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.697 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.698 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.698 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:59.698 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.956 01:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:59.956 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:00.215 00:11:00.215 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.215 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.215 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.474 { 00:11:00.474 "cntlid": 31, 00:11:00.474 "qid": 0, 00:11:00.474 "state": "enabled", 00:11:00.474 "thread": "nvmf_tgt_poll_group_000", 00:11:00.474 "listen_address": { 00:11:00.474 "trtype": "TCP", 00:11:00.474 "adrfam": "IPv4", 00:11:00.474 "traddr": "10.0.0.2", 00:11:00.474 "trsvcid": "4420" 00:11:00.474 }, 00:11:00.474 "peer_address": { 00:11:00.474 "trtype": "TCP", 00:11:00.474 "adrfam": "IPv4", 00:11:00.474 "traddr": "10.0.0.1", 00:11:00.474 "trsvcid": "38062" 00:11:00.474 }, 00:11:00.474 "auth": { 00:11:00.474 "state": "completed", 00:11:00.474 "digest": "sha256", 00:11:00.474 "dhgroup": "ffdhe4096" 00:11:00.474 } 00:11:00.474 } 00:11:00.474 ]' 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.474 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.733 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:00.733 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.733 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.733 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.733 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.992 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:01.560 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:01.819 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.387 00:11:02.387 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.387 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.387 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.646 { 00:11:02.646 "cntlid": 33, 00:11:02.646 "qid": 0, 00:11:02.646 "state": "enabled", 00:11:02.646 "thread": "nvmf_tgt_poll_group_000", 00:11:02.646 "listen_address": { 00:11:02.646 "trtype": "TCP", 00:11:02.646 "adrfam": "IPv4", 00:11:02.646 "traddr": "10.0.0.2", 00:11:02.646 "trsvcid": "4420" 00:11:02.646 }, 00:11:02.646 "peer_address": { 00:11:02.646 "trtype": "TCP", 00:11:02.646 "adrfam": "IPv4", 00:11:02.646 "traddr": "10.0.0.1", 00:11:02.646 "trsvcid": "38076" 00:11:02.646 }, 00:11:02.646 "auth": { 00:11:02.646 "state": "completed", 00:11:02.646 "digest": "sha256", 00:11:02.646 "dhgroup": "ffdhe6144" 00:11:02.646 } 00:11:02.646 } 00:11:02.646 ]' 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.646 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.905 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 
6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:11:03.843 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.843 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:03.843 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.843 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.843 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.843 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.843 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:03.843 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.843 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.410 00:11:04.410 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.410 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.410 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.668 { 00:11:04.668 "cntlid": 35, 00:11:04.668 "qid": 0, 00:11:04.668 "state": "enabled", 00:11:04.668 "thread": "nvmf_tgt_poll_group_000", 00:11:04.668 "listen_address": { 00:11:04.668 "trtype": "TCP", 00:11:04.668 "adrfam": "IPv4", 00:11:04.668 "traddr": "10.0.0.2", 00:11:04.668 "trsvcid": "4420" 00:11:04.668 }, 00:11:04.668 "peer_address": { 00:11:04.668 "trtype": "TCP", 00:11:04.668 "adrfam": "IPv4", 00:11:04.668 "traddr": "10.0.0.1", 00:11:04.668 "trsvcid": "38394" 00:11:04.668 }, 00:11:04.668 "auth": { 00:11:04.668 "state": "completed", 00:11:04.668 "digest": "sha256", 00:11:04.668 "dhgroup": "ffdhe6144" 00:11:04.668 } 00:11:04.668 } 00:11:04.668 ]' 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.668 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.927 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:04.927 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.927 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.927 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.927 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.186 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:11:05.762 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.762 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.762 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:05.762 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.762 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.762 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.762 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.762 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:05.762 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:06.022 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:06.022 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.022 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.022 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:06.022 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:06.022 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.022 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.023 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.023 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.023 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.023 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.023 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.589 00:11:06.589 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.589 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.589 01:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.848 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.848 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.848 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.848 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.848 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.848 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.848 { 00:11:06.848 "cntlid": 37, 00:11:06.848 "qid": 0, 00:11:06.848 "state": "enabled", 00:11:06.848 "thread": "nvmf_tgt_poll_group_000", 00:11:06.848 "listen_address": { 00:11:06.848 "trtype": "TCP", 00:11:06.848 "adrfam": "IPv4", 00:11:06.848 "traddr": "10.0.0.2", 00:11:06.848 "trsvcid": "4420" 00:11:06.848 }, 00:11:06.848 "peer_address": { 00:11:06.848 "trtype": "TCP", 00:11:06.848 "adrfam": "IPv4", 00:11:06.848 "traddr": "10.0.0.1", 00:11:06.848 "trsvcid": "38434" 00:11:06.848 }, 00:11:06.848 "auth": { 00:11:06.848 "state": "completed", 00:11:06.848 "digest": "sha256", 00:11:06.848 "dhgroup": "ffdhe6144" 00:11:06.848 } 00:11:06.848 } 00:11:06.848 ]' 00:11:06.848 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.848 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.848 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.848 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:06.848 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.848 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.848 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.848 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.107 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:11:08.043 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.043 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:08.043 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
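Every iteration above follows one fixed sequence. As a minimal standalone sketch — built only from commands that appear verbatim in this trace, and assuming the same target/initiator pair from this run with the named DH-CHAP keys (key2/ckey2 etc.) already registered earlier in auth.sh:

  # Host-side RPCs go through the separate /var/tmp/host.sock instance;
  # target-side RPCs (add_host/get_qpairs) use the default SPDK socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
  subnqn=nqn.2024-03.io.spdk:cnode0

  # 1. Target: permit the host with a DH-CHAP key pair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 2. Initiator: pin the digest/DH group under test, then attach.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 3. Target: confirm the qpair authenticated with the expected parameters.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # completed
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # sha256
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # ffdhe6144

  # 4. Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

After the SPDK-initiator pass, each iteration additionally repeats the handshake from the kernel initiator (`nvme connect ... --dhchap-secret DHHC-1:xx:... --dhchap-ctrl-secret DHHC-1:xx:...` with the literal secrets logged here) and then disconnects and removes the host, which is the connect/disconnect/remove_host pattern visible throughout this trace.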
00:11:08.043 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:08.043 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:08.609 00:11:08.609 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.609 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.609 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.868 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.868 01:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.868 { 00:11:08.868 "cntlid": 39, 00:11:08.868 "qid": 0, 00:11:08.868 "state": "enabled", 00:11:08.868 "thread": "nvmf_tgt_poll_group_000", 00:11:08.868 "listen_address": { 00:11:08.868 "trtype": "TCP", 00:11:08.868 "adrfam": "IPv4", 00:11:08.868 "traddr": "10.0.0.2", 00:11:08.868 "trsvcid": "4420" 00:11:08.868 }, 00:11:08.868 "peer_address": { 00:11:08.868 "trtype": "TCP", 00:11:08.868 "adrfam": "IPv4", 00:11:08.868 "traddr": "10.0.0.1", 00:11:08.868 "trsvcid": "38460" 00:11:08.868 }, 00:11:08.868 "auth": { 00:11:08.868 "state": "completed", 00:11:08.868 "digest": "sha256", 00:11:08.868 "dhgroup": "ffdhe6144" 00:11:08.868 } 00:11:08.868 } 00:11:08.868 ]' 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.868 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.126 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:11:10.061 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.062 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.629 00:11:10.893 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.893 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.893 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.893 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.893 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.893 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.893 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.893 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.893 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.893 { 00:11:10.893 "cntlid": 41, 00:11:10.893 "qid": 0, 
00:11:10.893 "state": "enabled", 00:11:10.893 "thread": "nvmf_tgt_poll_group_000", 00:11:10.893 "listen_address": { 00:11:10.893 "trtype": "TCP", 00:11:10.893 "adrfam": "IPv4", 00:11:10.893 "traddr": "10.0.0.2", 00:11:10.893 "trsvcid": "4420" 00:11:10.893 }, 00:11:10.893 "peer_address": { 00:11:10.893 "trtype": "TCP", 00:11:10.893 "adrfam": "IPv4", 00:11:10.893 "traddr": "10.0.0.1", 00:11:10.893 "trsvcid": "38490" 00:11:10.893 }, 00:11:10.893 "auth": { 00:11:10.893 "state": "completed", 00:11:10.893 "digest": "sha256", 00:11:10.893 "dhgroup": "ffdhe8192" 00:11:10.893 } 00:11:10.893 } 00:11:10.893 ]' 00:11:10.893 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.152 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.152 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.152 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:11.152 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.152 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.152 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.152 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.411 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:11:11.978 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.978 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:11.978 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.978 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.978 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.978 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.978 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:11.978 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.236 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.172 00:11:13.172 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.173 { 00:11:13.173 "cntlid": 43, 00:11:13.173 "qid": 0, 00:11:13.173 "state": "enabled", 00:11:13.173 "thread": "nvmf_tgt_poll_group_000", 00:11:13.173 "listen_address": { 00:11:13.173 "trtype": "TCP", 00:11:13.173 "adrfam": "IPv4", 00:11:13.173 "traddr": "10.0.0.2", 00:11:13.173 "trsvcid": "4420" 00:11:13.173 }, 00:11:13.173 "peer_address": { 00:11:13.173 "trtype": "TCP", 00:11:13.173 "adrfam": "IPv4", 00:11:13.173 "traddr": "10.0.0.1", 
00:11:13.173 "trsvcid": "43296" 00:11:13.173 }, 00:11:13.173 "auth": { 00:11:13.173 "state": "completed", 00:11:13.173 "digest": "sha256", 00:11:13.173 "dhgroup": "ffdhe8192" 00:11:13.173 } 00:11:13.173 } 00:11:13.173 ]' 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.173 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.431 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:13.431 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.431 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.431 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.431 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.690 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:11:14.257 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.257 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:14.257 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.257 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.257 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.257 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.257 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:14.257 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:14.515 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:14.516 01:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.516 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.083 00:11:15.341 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.341 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.341 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.341 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.341 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.341 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.341 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.600 { 00:11:15.600 "cntlid": 45, 00:11:15.600 "qid": 0, 00:11:15.600 "state": "enabled", 00:11:15.600 "thread": "nvmf_tgt_poll_group_000", 00:11:15.600 "listen_address": { 00:11:15.600 "trtype": "TCP", 00:11:15.600 "adrfam": "IPv4", 00:11:15.600 "traddr": "10.0.0.2", 00:11:15.600 "trsvcid": "4420" 00:11:15.600 }, 00:11:15.600 "peer_address": { 00:11:15.600 "trtype": "TCP", 00:11:15.600 "adrfam": "IPv4", 00:11:15.600 "traddr": "10.0.0.1", 00:11:15.600 "trsvcid": "43336" 00:11:15.600 }, 00:11:15.600 "auth": { 00:11:15.600 "state": "completed", 00:11:15.600 "digest": "sha256", 00:11:15.600 "dhgroup": "ffdhe8192" 00:11:15.600 } 00:11:15.600 } 00:11:15.600 ]' 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.600 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.858 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:11:16.426 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.426 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:16.426 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.426 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.426 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.426 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.426 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:16.426 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 
--dhchap-key key3 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.685 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.944 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.944 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:16.944 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.511 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.511 { 00:11:17.511 "cntlid": 47, 00:11:17.511 "qid": 0, 00:11:17.511 "state": "enabled", 00:11:17.511 "thread": "nvmf_tgt_poll_group_000", 00:11:17.511 "listen_address": { 00:11:17.511 "trtype": "TCP", 00:11:17.511 "adrfam": "IPv4", 00:11:17.511 "traddr": "10.0.0.2", 00:11:17.511 "trsvcid": "4420" 00:11:17.511 }, 00:11:17.511 "peer_address": { 00:11:17.511 "trtype": "TCP", 00:11:17.511 "adrfam": "IPv4", 00:11:17.511 "traddr": "10.0.0.1", 00:11:17.511 "trsvcid": "43374" 00:11:17.511 }, 00:11:17.511 "auth": { 00:11:17.511 "state": "completed", 00:11:17.511 "digest": "sha256", 00:11:17.511 "dhgroup": "ffdhe8192" 00:11:17.511 } 00:11:17.511 } 00:11:17.511 ]' 00:11:17.511 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.770 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.770 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.770 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:17.770 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.770 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:17.770 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.770 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.029 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:18.595 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.853 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.111 00:11:19.111 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.111 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.111 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.369 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.369 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.369 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.369 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.369 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.369 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.369 { 00:11:19.369 "cntlid": 49, 00:11:19.369 "qid": 0, 00:11:19.369 "state": "enabled", 00:11:19.369 "thread": "nvmf_tgt_poll_group_000", 00:11:19.369 "listen_address": { 00:11:19.369 "trtype": "TCP", 00:11:19.369 "adrfam": "IPv4", 00:11:19.369 "traddr": "10.0.0.2", 00:11:19.369 "trsvcid": "4420" 00:11:19.369 }, 00:11:19.369 "peer_address": { 00:11:19.369 "trtype": "TCP", 00:11:19.369 "adrfam": "IPv4", 00:11:19.369 "traddr": "10.0.0.1", 00:11:19.369 "trsvcid": "43410" 00:11:19.369 }, 00:11:19.369 "auth": { 00:11:19.369 "state": "completed", 00:11:19.369 "digest": "sha384", 00:11:19.369 "dhgroup": "null" 00:11:19.369 } 00:11:19.369 } 00:11:19.369 ]' 00:11:19.369 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.628 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.628 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.628 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:19.628 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.628 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.628 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.628 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:19.886 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=:
00:11:20.453 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:20.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:20.453 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:20.453 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.453 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:20.453 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.453 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:20.453 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:11:20.453 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.712 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:20.970 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.970 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:20.970 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:21.229
00:11:21.229 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:21.229 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:21.229 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:21.488 {
00:11:21.488 "cntlid": 51,
00:11:21.488 "qid": 0,
00:11:21.488 "state": "enabled",
00:11:21.488 "thread": "nvmf_tgt_poll_group_000",
00:11:21.488 "listen_address": {
00:11:21.488 "trtype": "TCP",
00:11:21.488 "adrfam": "IPv4",
00:11:21.488 "traddr": "10.0.0.2",
00:11:21.488 "trsvcid": "4420"
00:11:21.488 },
00:11:21.488 "peer_address": {
00:11:21.488 "trtype": "TCP",
00:11:21.488 "adrfam": "IPv4",
00:11:21.488 "traddr": "10.0.0.1",
00:11:21.488 "trsvcid": "43440"
00:11:21.488 },
00:11:21.488 "auth": {
00:11:21.488 "state": "completed",
00:11:21.488 "digest": "sha384",
00:11:21.488 "dhgroup": "null"
00:11:21.488 }
00:11:21.488 }
00:11:21.488 ]'
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:21.488 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:21.747 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==:
00:11:22.315 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:22.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:22.315 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:22.315 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:22.315 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:22.574 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:22.833 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:22.833 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:22.833 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:22.833
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.092 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:23.092 {
00:11:23.092 "cntlid": 53,
00:11:23.092 "qid": 0,
00:11:23.092 "state": "enabled",
00:11:23.092 "thread": "nvmf_tgt_poll_group_000",
00:11:23.092 "listen_address": {
00:11:23.092 "trtype": "TCP",
00:11:23.092 "adrfam": "IPv4",
00:11:23.092 "traddr": "10.0.0.2",
00:11:23.092 "trsvcid": "4420"
00:11:23.092 },
00:11:23.092 "peer_address": {
00:11:23.092 "trtype": "TCP",
00:11:23.092 "adrfam": "IPv4",
00:11:23.092 "traddr": "10.0.0.1",
00:11:23.092 "trsvcid": "55940"
00:11:23.092 },
00:11:23.092 "auth": {
00:11:23.092 "state": "completed",
00:11:23.092 "digest": "sha384",
00:11:23.092 "dhgroup": "null"
00:11:23.092 }
00:11:23.092 }
00:11:23.092 ]'
00:11:23.352 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:23.352 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:23.352 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:23.352 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:11:23.352 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:23.352 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:23.352 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:23.352 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:23.611 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5:
00:11:24.179 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:24.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:24.179 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:24.179 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:24.179 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:24.179 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:24.179 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:24.179 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:11:24.179 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:11:24.438 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:11:24.697
00:11:24.697 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:24.697 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:24.697 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:24.957 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:24.957 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:24.957 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:24.957 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:24.957 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:24.957 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:24.957 {
00:11:24.957 "cntlid": 55,
00:11:24.957 "qid": 0,
00:11:24.957 "state": "enabled",
00:11:24.957 "thread": "nvmf_tgt_poll_group_000",
00:11:24.957 "listen_address": {
00:11:24.957 "trtype": "TCP",
00:11:24.957 "adrfam": "IPv4",
00:11:24.957 "traddr": "10.0.0.2",
00:11:24.957 "trsvcid": "4420"
00:11:24.957 },
00:11:24.957 "peer_address": {
00:11:24.957 "trtype": "TCP",
00:11:24.957 "adrfam": "IPv4",
00:11:24.957 "traddr": "10.0.0.1",
00:11:24.957 "trsvcid": "55968"
00:11:24.957 },
00:11:24.957 "auth": {
00:11:24.957 "state": "completed",
00:11:24.957 "digest": "sha384",
00:11:24.957 "dhgroup": "null"
00:11:24.957 }
00:11:24.957 }
00:11:24.957 ]'
00:11:25.216 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:25.216 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:25.216 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:25.216 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:11:25.216 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:25.216 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:25.216 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:25.216 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:25.475 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=:
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:26.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:11:26.043 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:26.302 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:26.561
00:11:26.820 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:26.820 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:26.820 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:27.080 {
00:11:27.080 "cntlid": 57,
00:11:27.080 "qid": 0,
00:11:27.080 "state": "enabled",
00:11:27.080 "thread": "nvmf_tgt_poll_group_000",
00:11:27.080 "listen_address": {
00:11:27.080 "trtype": "TCP",
00:11:27.080 "adrfam": "IPv4",
00:11:27.080 "traddr": "10.0.0.2",
00:11:27.080 "trsvcid": "4420"
00:11:27.080 },
00:11:27.080 "peer_address": {
00:11:27.080 "trtype": "TCP",
00:11:27.080 "adrfam": "IPv4",
00:11:27.080 "traddr": "10.0.0.1",
00:11:27.080 "trsvcid": "55986"
00:11:27.080 },
00:11:27.080 "auth": {
00:11:27.080 "state": "completed",
00:11:27.080 "digest": "sha384",
00:11:27.080 "dhgroup": "ffdhe2048"
00:11:27.080 }
00:11:27.080 }
00:11:27.080 ]'
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:27.080 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:27.340 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=:
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:28.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:28.276 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:28.843
00:11:28.843 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:28.843 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:28.843 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:28.843 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:28.843 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:28.843 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.843 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:29.102 {
00:11:29.102 "cntlid": 59,
00:11:29.102 "qid": 0,
00:11:29.102 "state": "enabled",
00:11:29.102 "thread": "nvmf_tgt_poll_group_000",
00:11:29.102 "listen_address": {
00:11:29.102 "trtype": "TCP",
00:11:29.102 "adrfam": "IPv4",
00:11:29.102 "traddr": "10.0.0.2",
00:11:29.102 "trsvcid": "4420"
00:11:29.102 },
00:11:29.102 "peer_address": {
00:11:29.102 "trtype": "TCP",
00:11:29.102 "adrfam": "IPv4",
00:11:29.102 "traddr": "10.0.0.1",
00:11:29.102 "trsvcid": "56014"
00:11:29.102 },
00:11:29.102 "auth": {
00:11:29.102 "state": "completed",
00:11:29.102 "digest": "sha384",
00:11:29.102 "dhgroup": "ffdhe2048"
00:11:29.102 }
00:11:29.102 }
00:11:29.102 ]'
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:29.102 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:29.361 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==:
00:11:29.927 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:29.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:29.927 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:29.927 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.927 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:29.927 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.927 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:29.927 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:11:29.927 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
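[Annotation] The run is midway through its sha384/ffdhe2048 pass at this point. Every iteration of the loop traced above repeats the same host/target RPC sequence: restrict the host-side DH-HMAC-CHAP digests and DH groups, register the host NQN on the subsystem with the key pair under test, attach a controller over TCP (which drives the authentication handshake), then verify and tear everything down. The following is a condensed sketch of one such iteration, not the verbatim target/auth.sh code; it assumes the rpc.py path and /var/tmp/host.sock host socket used throughout this run, and key2/ckey2 name keys registered earlier in the test.

  #!/usr/bin/env bash
  # Sketch of one connect_authenticate iteration (illustrative, not the exact script).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d

  # Host side: allow only the digest/DH-group combination under test.
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Target side (default RPC socket): permit the host NQN with the key pair under test.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attaching the controller performs the DH-HMAC-CHAP handshake.
  "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Confirm the controller exists, then tear down for the next combination.
  "$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'
  "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
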
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.186 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:30.444 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.444 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:30.444 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:30.703
00:11:30.703 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:30.703 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:30.703 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:30.962 {
00:11:30.962 "cntlid": 61,
00:11:30.962 "qid": 0,
00:11:30.962 "state": "enabled",
00:11:30.962 "thread": "nvmf_tgt_poll_group_000",
00:11:30.962 "listen_address": {
00:11:30.962 "trtype": "TCP",
00:11:30.962 "adrfam": "IPv4",
00:11:30.962 "traddr": "10.0.0.2",
00:11:30.962 "trsvcid": "4420"
00:11:30.962 },
00:11:30.962 "peer_address": {
00:11:30.962 "trtype": "TCP",
00:11:30.962 "adrfam": "IPv4",
00:11:30.962 "traddr": "10.0.0.1",
00:11:30.962 "trsvcid": "56058"
00:11:30.962 },
00:11:30.962 "auth": {
00:11:30.962 "state": "completed",
00:11:30.962 "digest": "sha384",
00:11:30.962 "dhgroup": "ffdhe2048"
00:11:30.962 }
00:11:30.962 }
00:11:30.962 ]'
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:30.962 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:31.530 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5:
00:11:32.097 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:32.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:32.097 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:32.097 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.097 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:32.097 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.097 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:32.097 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:11:32.097 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.356 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:32.357 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.357 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:11:32.357 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:11:32.636
00:11:32.636 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:32.636 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:32.636 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:32.911 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:32.911 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:32.911 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.911 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:32.911 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.911 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:32.911 {
00:11:32.911 "cntlid": 63,
00:11:32.911 "qid": 0,
00:11:32.911 "state": "enabled",
00:11:32.911 "thread": "nvmf_tgt_poll_group_000",
00:11:32.911 "listen_address": {
00:11:32.911 "trtype": "TCP",
00:11:32.911 "adrfam": "IPv4",
00:11:32.911 "traddr": "10.0.0.2",
00:11:32.911 "trsvcid": "4420"
00:11:32.911 },
00:11:32.911 "peer_address": {
00:11:32.911 "trtype": "TCP",
00:11:32.911 "adrfam": "IPv4",
00:11:32.911 "traddr": "10.0.0.1",
00:11:32.911 "trsvcid": "45970"
00:11:32.911 },
00:11:32.911 "auth": {
00:11:32.911 "state": "completed",
00:11:32.911 "digest": "sha384",
00:11:32.911 "dhgroup": "ffdhe2048"
00:11:32.911 }
00:11:32.911 }
00:11:32.911 ]'
00:11:32.911 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:32.911 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:33.170 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:33.170 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:11:33.170 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:33.170 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:33.170 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:33.170 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:33.428 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=:
00:11:33.994 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:33.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:33.994 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:33.995 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.995 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:33.995 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.995 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:11:33.995 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:33.995 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:11:33.995 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:34.253 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:34.818
00:11:34.818 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:34.818 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:34.818 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:34.818 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:34.818 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:34.818 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.818 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:35.076 {
00:11:35.076 "cntlid": 65,
00:11:35.076 "qid": 0,
00:11:35.076 "state": "enabled",
00:11:35.076 "thread": "nvmf_tgt_poll_group_000",
00:11:35.076 "listen_address": {
00:11:35.076 "trtype": "TCP",
00:11:35.076 "adrfam": "IPv4",
00:11:35.076 "traddr": "10.0.0.2",
00:11:35.076 "trsvcid": "4420"
00:11:35.076 },
00:11:35.076 "peer_address": {
00:11:35.076 "trtype": "TCP",
00:11:35.076 "adrfam": "IPv4",
00:11:35.076 "traddr": "10.0.0.1",
00:11:35.076 "trsvcid": "46002"
00:11:35.076 },
00:11:35.076 "auth": {
00:11:35.076 "state": "completed",
00:11:35.076 "digest": "sha384",
00:11:35.076 "dhgroup": "ffdhe3072"
00:11:35.076 }
00:11:35.076 }
00:11:35.076 ]'
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:35.076 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:35.334 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=:
00:11:35.900 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:35.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:35.900 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:35.900 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.900 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:35.900 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.900 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:35.900 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:11:35.900 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:36.467 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:36.725
00:11:36.725 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:11:36.725 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:11:36.725 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:11:36.983 {
00:11:36.983 "cntlid": 67,
00:11:36.983 "qid": 0,
00:11:36.983 "state": "enabled",
00:11:36.983 "thread": "nvmf_tgt_poll_group_000",
00:11:36.983 "listen_address": {
00:11:36.983 "trtype": "TCP",
00:11:36.983 "adrfam": "IPv4",
00:11:36.983 "traddr": "10.0.0.2",
00:11:36.983 "trsvcid": "4420"
00:11:36.983 },
00:11:36.983 "peer_address": {
00:11:36.983 "trtype": "TCP",
00:11:36.983 "adrfam": "IPv4",
00:11:36.983 "traddr": "10.0.0.1",
00:11:36.983 "trsvcid": "46038"
00:11:36.983 },
00:11:36.983 "auth": {
00:11:36.983 "state": "completed",
00:11:36.983 "digest": "sha384",
00:11:36.983 "dhgroup": "ffdhe3072"
00:11:36.983 }
00:11:36.983 }
00:11:36.983 ]'
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:11:36.983 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:11:37.242 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:11:37.242 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:11:37.242 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:37.242 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:37.242 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:37.499 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==:
00:11:38.066 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:38.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:38.067 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d
00:11:38.067 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.067 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:38.067 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.067 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:11:38.067 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:11:38.067 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:11:38.324 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2
00:11:38.324 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:11:38.324 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:11:38.324 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:11:38.324 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:11:38.325 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:38.325 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:38.325 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.325 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:38.325 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.325 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:38.325 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
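[Annotation] As the entries below show, each successful attach is followed by a qpair query whose auth block records the negotiated parameters; the script asserts that the digest, DH group, and authentication state match what was configured. A minimal sketch of that verification step, reusing the jq filters from this log (rpc.py path as above; sha384/ffdhe3072 match the pass in progress). Note the bare [[ ... ]] tests act as assertions only under set -e or an explicit exit-status check.

  # Fetch the subsystem's qpairs (target-side RPC) and assert the negotiated auth fields.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
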
00:11:38.890 00:11:38.890 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.890 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.890 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.149 { 00:11:39.149 "cntlid": 69, 00:11:39.149 "qid": 0, 00:11:39.149 "state": "enabled", 00:11:39.149 "thread": "nvmf_tgt_poll_group_000", 00:11:39.149 "listen_address": { 00:11:39.149 "trtype": "TCP", 00:11:39.149 "adrfam": "IPv4", 00:11:39.149 "traddr": "10.0.0.2", 00:11:39.149 "trsvcid": "4420" 00:11:39.149 }, 00:11:39.149 "peer_address": { 00:11:39.149 "trtype": "TCP", 00:11:39.149 "adrfam": "IPv4", 00:11:39.149 "traddr": "10.0.0.1", 00:11:39.149 "trsvcid": "46052" 00:11:39.149 }, 00:11:39.149 "auth": { 00:11:39.149 "state": "completed", 00:11:39.149 "digest": "sha384", 00:11:39.149 "dhgroup": "ffdhe3072" 00:11:39.149 } 00:11:39.149 } 00:11:39.149 ]' 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.149 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.407 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
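After the userspace path checks out, the suite repeats the handshake with the kernel initiator: nvme connect is handed the same material in DHHC-1 wire format (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key), and the "disconnected 1 controller(s)" line above confirms a live session was actually established and then torn down. A sketch of that round trip, with <host-secret> and <ctrl-secret> as hypothetical stand-ins for the DHHC-1:xx:...: blobs:

    # Kernel initiator: authenticate with DHHC-1 formatted secrets.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'

    # NQN-scoped teardown; reports how many controllers were dropped.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0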
00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.368 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.935 00:11:40.935 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.935 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.935 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.935 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.935 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.935 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.935 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.935 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.935 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.935 { 00:11:40.935 "cntlid": 71, 00:11:40.935 "qid": 0, 00:11:40.935 "state": "enabled", 00:11:40.935 "thread": "nvmf_tgt_poll_group_000", 00:11:40.935 "listen_address": { 00:11:40.935 "trtype": "TCP", 00:11:40.935 "adrfam": "IPv4", 00:11:40.935 "traddr": "10.0.0.2", 00:11:40.935 "trsvcid": "4420" 00:11:40.935 }, 00:11:40.935 "peer_address": { 00:11:40.935 "trtype": "TCP", 00:11:40.935 "adrfam": "IPv4", 00:11:40.935 "traddr": "10.0.0.1", 00:11:40.935 "trsvcid": "46076" 00:11:40.935 }, 00:11:40.935 "auth": { 00:11:40.935 "state": "completed", 00:11:40.935 "digest": "sha384", 00:11:40.935 "dhgroup": "ffdhe3072" 00:11:40.935 } 00:11:40.935 } 00:11:40.935 ]' 00:11:40.935 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.194 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.194 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.194 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:41.194 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.194 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.194 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.194 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.452 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:42.019 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.537 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.537 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.537 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.795 00:11:42.795 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.795 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.795 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.054 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.054 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.055 01:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.055 { 00:11:43.055 "cntlid": 73, 00:11:43.055 "qid": 0, 00:11:43.055 "state": "enabled", 00:11:43.055 "thread": "nvmf_tgt_poll_group_000", 00:11:43.055 "listen_address": { 00:11:43.055 "trtype": "TCP", 00:11:43.055 "adrfam": "IPv4", 00:11:43.055 "traddr": "10.0.0.2", 00:11:43.055 "trsvcid": "4420" 00:11:43.055 }, 00:11:43.055 "peer_address": { 00:11:43.055 "trtype": "TCP", 00:11:43.055 "adrfam": "IPv4", 00:11:43.055 "traddr": "10.0.0.1", 00:11:43.055 "trsvcid": "35594" 00:11:43.055 }, 00:11:43.055 "auth": { 00:11:43.055 "state": "completed", 00:11:43.055 "digest": "sha384", 00:11:43.055 "dhgroup": "ffdhe4096" 00:11:43.055 } 00:11:43.055 } 00:11:43.055 ]' 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.055 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.313 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:11:43.879 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.137 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:44.137 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.137 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.137 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.137 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.137 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:44.137 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.395 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.653 00:11:44.653 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.653 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.653 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.911 { 00:11:44.911 "cntlid": 75, 00:11:44.911 "qid": 0, 00:11:44.911 
"state": "enabled", 00:11:44.911 "thread": "nvmf_tgt_poll_group_000", 00:11:44.911 "listen_address": { 00:11:44.911 "trtype": "TCP", 00:11:44.911 "adrfam": "IPv4", 00:11:44.911 "traddr": "10.0.0.2", 00:11:44.911 "trsvcid": "4420" 00:11:44.911 }, 00:11:44.911 "peer_address": { 00:11:44.911 "trtype": "TCP", 00:11:44.911 "adrfam": "IPv4", 00:11:44.911 "traddr": "10.0.0.1", 00:11:44.911 "trsvcid": "35610" 00:11:44.911 }, 00:11:44.911 "auth": { 00:11:44.911 "state": "completed", 00:11:44.911 "digest": "sha384", 00:11:44.911 "dhgroup": "ffdhe4096" 00:11:44.911 } 00:11:44.911 } 00:11:44.911 ]' 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.911 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.169 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:45.169 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.169 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.169 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.169 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.427 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:11:45.994 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.994 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:45.994 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.994 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.994 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.994 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.994 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:45.994 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.253 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.826 00:11:46.826 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.826 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.826 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.826 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.826 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.826 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.826 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.826 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.096 { 00:11:47.096 "cntlid": 77, 00:11:47.096 "qid": 0, 00:11:47.096 "state": "enabled", 00:11:47.096 "thread": "nvmf_tgt_poll_group_000", 00:11:47.096 "listen_address": { 00:11:47.096 "trtype": "TCP", 00:11:47.096 "adrfam": "IPv4", 00:11:47.096 "traddr": "10.0.0.2", 00:11:47.096 "trsvcid": "4420" 00:11:47.096 }, 00:11:47.096 "peer_address": { 00:11:47.096 "trtype": "TCP", 00:11:47.096 "adrfam": "IPv4", 00:11:47.096 "traddr": "10.0.0.1", 00:11:47.096 "trsvcid": "35630" 00:11:47.096 }, 00:11:47.096 
"auth": { 00:11:47.096 "state": "completed", 00:11:47.096 "digest": "sha384", 00:11:47.096 "dhgroup": "ffdhe4096" 00:11:47.096 } 00:11:47.096 } 00:11:47.096 ]' 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.096 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.354 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.286 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.853 00:11:48.853 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.853 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.853 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.111 { 00:11:49.111 "cntlid": 79, 00:11:49.111 "qid": 0, 00:11:49.111 "state": "enabled", 00:11:49.111 "thread": "nvmf_tgt_poll_group_000", 00:11:49.111 "listen_address": { 00:11:49.111 "trtype": "TCP", 00:11:49.111 "adrfam": "IPv4", 00:11:49.111 "traddr": "10.0.0.2", 00:11:49.111 "trsvcid": "4420" 00:11:49.111 }, 00:11:49.111 "peer_address": { 00:11:49.111 "trtype": "TCP", 00:11:49.111 "adrfam": "IPv4", 00:11:49.111 "traddr": "10.0.0.1", 00:11:49.111 "trsvcid": "35650" 00:11:49.111 }, 00:11:49.111 "auth": { 00:11:49.111 "state": "completed", 00:11:49.111 "digest": "sha384", 00:11:49.111 "dhgroup": "ffdhe4096" 00:11:49.111 } 00:11:49.111 } 00:11:49.111 ]' 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
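Each successful attach is then cross-checked from the target side: nvmf_subsystem_get_qpairs returns one entry per connection, and its auth object must report the digest and dhgroup that were just configured plus state "completed", proving the parameters were genuinely negotiated rather than silently downgraded. The jq probes in the trace reduce to checks of this shape (values follow the ffdhe4096 pass above):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]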
00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.111 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.370 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.307 01:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.307 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.874 00:11:50.874 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.874 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.874 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.134 { 00:11:51.134 "cntlid": 81, 00:11:51.134 "qid": 0, 00:11:51.134 "state": "enabled", 00:11:51.134 "thread": "nvmf_tgt_poll_group_000", 00:11:51.134 "listen_address": { 00:11:51.134 "trtype": "TCP", 00:11:51.134 "adrfam": "IPv4", 00:11:51.134 "traddr": "10.0.0.2", 00:11:51.134 "trsvcid": "4420" 00:11:51.134 }, 00:11:51.134 "peer_address": { 00:11:51.134 "trtype": "TCP", 00:11:51.134 "adrfam": "IPv4", 00:11:51.134 "traddr": "10.0.0.1", 00:11:51.134 "trsvcid": "35666" 00:11:51.134 }, 00:11:51.134 "auth": { 00:11:51.134 "state": "completed", 00:11:51.134 "digest": "sha384", 00:11:51.134 "dhgroup": "ffdhe6144" 00:11:51.134 } 00:11:51.134 } 00:11:51.134 ]' 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:51.134 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.392 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:51.392 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.392 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.650 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:11:52.216 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.216 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:52.216 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.216 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.216 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.216 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.216 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:52.216 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.475 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.041 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.041 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.042 { 00:11:53.042 "cntlid": 83, 00:11:53.042 "qid": 0, 00:11:53.042 "state": "enabled", 00:11:53.042 "thread": "nvmf_tgt_poll_group_000", 00:11:53.042 "listen_address": { 00:11:53.042 "trtype": "TCP", 00:11:53.042 "adrfam": "IPv4", 00:11:53.042 "traddr": "10.0.0.2", 00:11:53.042 "trsvcid": "4420" 00:11:53.042 }, 00:11:53.042 "peer_address": { 00:11:53.042 "trtype": "TCP", 00:11:53.042 "adrfam": "IPv4", 00:11:53.042 "traddr": "10.0.0.1", 00:11:53.042 "trsvcid": "43982" 00:11:53.042 }, 00:11:53.042 "auth": { 00:11:53.042 "state": "completed", 00:11:53.042 "digest": "sha384", 00:11:53.042 "dhgroup": "ffdhe6144" 00:11:53.042 } 00:11:53.042 } 00:11:53.042 ]' 00:11:53.042 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.042 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.300 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.301 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:53.301 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.301 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.301 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.301 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.559 01:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:11:54.127 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.386 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:54.386 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.386 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.386 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.386 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.386 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:54.386 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.645 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.903 00:11:54.903 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.903 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.903 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.162 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.162 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.162 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.163 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.163 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.163 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.163 { 00:11:55.163 "cntlid": 85, 00:11:55.163 "qid": 0, 00:11:55.163 "state": "enabled", 00:11:55.163 "thread": "nvmf_tgt_poll_group_000", 00:11:55.163 "listen_address": { 00:11:55.163 "trtype": "TCP", 00:11:55.163 "adrfam": "IPv4", 00:11:55.163 "traddr": "10.0.0.2", 00:11:55.163 "trsvcid": "4420" 00:11:55.163 }, 00:11:55.163 "peer_address": { 00:11:55.163 "trtype": "TCP", 00:11:55.163 "adrfam": "IPv4", 00:11:55.163 "traddr": "10.0.0.1", 00:11:55.163 "trsvcid": "44004" 00:11:55.163 }, 00:11:55.163 "auth": { 00:11:55.163 "state": "completed", 00:11:55.163 "digest": "sha384", 00:11:55.163 "dhgroup": "ffdhe6144" 00:11:55.163 } 00:11:55.163 } 00:11:55.163 ]' 00:11:55.163 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.421 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.421 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.421 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:55.421 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.421 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.421 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.421 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.678 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:11:56.244 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.244 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:56.244 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.244 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.502 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.502 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.502 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:56.502 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.760 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.018 00:11:57.276 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.276 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.276 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.276 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.534 { 00:11:57.534 "cntlid": 87, 00:11:57.534 "qid": 0, 00:11:57.534 "state": "enabled", 00:11:57.534 "thread": "nvmf_tgt_poll_group_000", 00:11:57.534 "listen_address": { 00:11:57.534 "trtype": "TCP", 00:11:57.534 "adrfam": "IPv4", 00:11:57.534 "traddr": "10.0.0.2", 00:11:57.534 "trsvcid": "4420" 00:11:57.534 }, 00:11:57.534 "peer_address": { 00:11:57.534 "trtype": "TCP", 00:11:57.534 "adrfam": "IPv4", 00:11:57.534 "traddr": "10.0.0.1", 00:11:57.534 "trsvcid": "44026" 00:11:57.534 }, 00:11:57.534 "auth": { 00:11:57.534 "state": "completed", 00:11:57.534 "digest": "sha384", 00:11:57.534 "dhgroup": "ffdhe6144" 00:11:57.534 } 00:11:57.534 } 00:11:57.534 ]' 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.534 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.792 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:58.723 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.723 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.981 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.981 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.981 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.546 00:11:59.546 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.546 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.546 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.804 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.804 01:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.804 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.804 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.804 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.804 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.804 { 00:11:59.804 "cntlid": 89, 00:11:59.804 "qid": 0, 00:11:59.804 "state": "enabled", 00:11:59.804 "thread": "nvmf_tgt_poll_group_000", 00:11:59.804 "listen_address": { 00:11:59.804 "trtype": "TCP", 00:11:59.804 "adrfam": "IPv4", 00:11:59.804 "traddr": "10.0.0.2", 00:11:59.804 "trsvcid": "4420" 00:11:59.804 }, 00:11:59.804 "peer_address": { 00:11:59.804 "trtype": "TCP", 00:11:59.804 "adrfam": "IPv4", 00:11:59.804 "traddr": "10.0.0.1", 00:11:59.804 "trsvcid": "44056" 00:11:59.804 }, 00:11:59.804 "auth": { 00:11:59.804 "state": "completed", 00:11:59.804 "digest": "sha384", 00:11:59.804 "dhgroup": "ffdhe8192" 00:11:59.804 } 00:11:59.804 } 00:11:59.804 ]' 00:11:59.804 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.804 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.804 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.804 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:59.804 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.062 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.062 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.062 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.320 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:12:00.887 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.887 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:00.887 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.887 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.887 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.887 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.887 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:00.887 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.144 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.145 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.145 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.145 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.076 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.077 01:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.077 { 00:12:02.077 "cntlid": 91, 00:12:02.077 "qid": 0, 00:12:02.077 "state": "enabled", 00:12:02.077 "thread": "nvmf_tgt_poll_group_000", 00:12:02.077 "listen_address": { 00:12:02.077 "trtype": "TCP", 00:12:02.077 "adrfam": "IPv4", 00:12:02.077 "traddr": "10.0.0.2", 00:12:02.077 "trsvcid": "4420" 00:12:02.077 }, 00:12:02.077 "peer_address": { 00:12:02.077 "trtype": "TCP", 00:12:02.077 "adrfam": "IPv4", 00:12:02.077 "traddr": "10.0.0.1", 00:12:02.077 "trsvcid": "44080" 00:12:02.077 }, 00:12:02.077 "auth": { 00:12:02.077 "state": "completed", 00:12:02.077 "digest": "sha384", 00:12:02.077 "dhgroup": "ffdhe8192" 00:12:02.077 } 00:12:02.077 } 00:12:02.077 ]' 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.077 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.335 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.335 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.335 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.335 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.335 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.593 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:12:03.159 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.159 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:03.159 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.159 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.159 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.159 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.159 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:03.159 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.725 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.291 00:12:04.291 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.291 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.291 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.562 { 00:12:04.562 "cntlid": 93, 00:12:04.562 "qid": 0, 00:12:04.562 "state": "enabled", 00:12:04.562 "thread": "nvmf_tgt_poll_group_000", 00:12:04.562 "listen_address": { 00:12:04.562 "trtype": "TCP", 00:12:04.562 "adrfam": "IPv4", 
00:12:04.562 "traddr": "10.0.0.2", 00:12:04.562 "trsvcid": "4420" 00:12:04.562 }, 00:12:04.562 "peer_address": { 00:12:04.562 "trtype": "TCP", 00:12:04.562 "adrfam": "IPv4", 00:12:04.562 "traddr": "10.0.0.1", 00:12:04.562 "trsvcid": "34752" 00:12:04.562 }, 00:12:04.562 "auth": { 00:12:04.562 "state": "completed", 00:12:04.562 "digest": "sha384", 00:12:04.562 "dhgroup": "ffdhe8192" 00:12:04.562 } 00:12:04.562 } 00:12:04.562 ]' 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.562 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.834 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:12:05.768 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.768 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:05.768 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.768 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.768 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.768 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.768 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:05.768 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.028 01:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:06.028 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:06.594 00:12:06.594 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.594 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.594 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.853 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.853 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.853 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.853 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.853 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.853 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.853 { 00:12:06.853 "cntlid": 95, 00:12:06.853 "qid": 0, 00:12:06.854 "state": "enabled", 00:12:06.854 "thread": "nvmf_tgt_poll_group_000", 00:12:06.854 "listen_address": { 00:12:06.854 "trtype": "TCP", 00:12:06.854 "adrfam": "IPv4", 00:12:06.854 "traddr": "10.0.0.2", 00:12:06.854 "trsvcid": "4420" 00:12:06.854 }, 00:12:06.854 "peer_address": { 00:12:06.854 "trtype": "TCP", 00:12:06.854 "adrfam": "IPv4", 00:12:06.854 "traddr": "10.0.0.1", 00:12:06.854 "trsvcid": "34770" 00:12:06.854 }, 00:12:06.854 "auth": { 00:12:06.854 "state": "completed", 00:12:06.854 "digest": "sha384", 00:12:06.854 "dhgroup": "ffdhe8192" 00:12:06.854 } 00:12:06.854 } 00:12:06.854 ]' 00:12:06.854 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.854 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.854 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.854 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:06.854 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.112 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.112 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.112 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.112 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:08.047 01:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.047 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.390 00:12:08.390 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.390 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.390 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.648 { 00:12:08.648 "cntlid": 97, 00:12:08.648 "qid": 0, 00:12:08.648 "state": "enabled", 00:12:08.648 "thread": "nvmf_tgt_poll_group_000", 00:12:08.648 "listen_address": { 00:12:08.648 "trtype": "TCP", 00:12:08.648 "adrfam": "IPv4", 00:12:08.648 "traddr": "10.0.0.2", 00:12:08.648 "trsvcid": "4420" 00:12:08.648 }, 00:12:08.648 "peer_address": { 00:12:08.648 "trtype": "TCP", 00:12:08.648 "adrfam": "IPv4", 00:12:08.648 "traddr": "10.0.0.1", 00:12:08.648 "trsvcid": "34780" 00:12:08.648 }, 00:12:08.648 "auth": { 00:12:08.648 "state": "completed", 00:12:08.648 "digest": "sha512", 00:12:08.648 "dhgroup": "null" 00:12:08.648 } 00:12:08.648 } 00:12:08.648 ]' 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:08.648 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.906 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.907 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.907 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.165 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:12:09.733 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.733 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:09.733 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.733 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.733 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.733 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.733 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:09.733 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.992 01:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.992 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.251 00:12:10.251 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.251 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.251 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.509 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.509 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.509 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.509 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.509 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.509 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.509 { 00:12:10.509 "cntlid": 99, 00:12:10.509 "qid": 0, 00:12:10.509 "state": "enabled", 00:12:10.509 "thread": "nvmf_tgt_poll_group_000", 00:12:10.509 "listen_address": { 00:12:10.509 "trtype": "TCP", 00:12:10.509 "adrfam": "IPv4", 00:12:10.509 "traddr": "10.0.0.2", 00:12:10.509 "trsvcid": "4420" 00:12:10.509 }, 00:12:10.509 "peer_address": { 00:12:10.509 "trtype": "TCP", 00:12:10.509 "adrfam": "IPv4", 00:12:10.509 "traddr": "10.0.0.1", 00:12:10.509 "trsvcid": "34800" 00:12:10.509 }, 00:12:10.509 "auth": { 00:12:10.509 "state": "completed", 00:12:10.510 "digest": "sha512", 00:12:10.510 "dhgroup": "null" 00:12:10.510 } 00:12:10.510 } 00:12:10.510 ]' 00:12:10.510 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.510 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.510 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.768 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:10.768 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.768 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
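
Taken together, the entries above trace one full verification pass: the host is restricted to a single digest/DH-group pair, the target registers the host NQN with a key pair, the host attaches a controller, and the negotiated parameters on the resulting qpair are checked before tear-down. Condensed into the commands the trace actually runs (a sketch only: "hostrpc" and "rpc_cmd" are this suite's wrappers around scripts/rpc.py for the host socket /var/tmp/host.sock and for the target respectively, and key1/ckey1 are key names loaded earlier in the run):

  # restrict the host to one digest and one DH group for this pass
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  # register the host NQN on the target with a bidirectional key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach from the host side, authenticating with the same keys
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # the qpair should report the allowed digest/dhgroup and auth.state "completed"
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  hostrpc bdev_nvme_detach_controller nvme0
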
00:12:10.768 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.768 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.027 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:12:11.594 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.594 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:11.594 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.594 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.594 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.594 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.594 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:11.594 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.853 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.111 00:12:12.370 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.370 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.370 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.370 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.370 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.370 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.370 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.629 { 00:12:12.629 "cntlid": 101, 00:12:12.629 "qid": 0, 00:12:12.629 "state": "enabled", 00:12:12.629 "thread": "nvmf_tgt_poll_group_000", 00:12:12.629 "listen_address": { 00:12:12.629 "trtype": "TCP", 00:12:12.629 "adrfam": "IPv4", 00:12:12.629 "traddr": "10.0.0.2", 00:12:12.629 "trsvcid": "4420" 00:12:12.629 }, 00:12:12.629 "peer_address": { 00:12:12.629 "trtype": "TCP", 00:12:12.629 "adrfam": "IPv4", 00:12:12.629 "traddr": "10.0.0.1", 00:12:12.629 "trsvcid": "34844" 00:12:12.629 }, 00:12:12.629 "auth": { 00:12:12.629 "state": "completed", 00:12:12.629 "digest": "sha512", 00:12:12.629 "dhgroup": "null" 00:12:12.629 } 00:12:12.629 } 00:12:12.629 ]' 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.629 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.888 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:12:13.456 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.456 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:13.456 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.456 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.456 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.456 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.456 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:13.456 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:13.714 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:13.714 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.714 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.714 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:13.715 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:13.715 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.715 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:12:13.715 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.715 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.715 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.715 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:13.715 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.282 
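
The nvme connect / nvme disconnect pair above repeats the same check from the kernel initiator, passing the secrets in-band rather than by key name. In the DHHC-1 secret representation, the second field identifies the hash used to transform the key (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the key2/ckey2 pair shows up as DHHC-1:02:…/DHHC-1:01:…. A minimal standalone equivalent, reusing the host ID and one secret pair verbatim from this trace:

  # connect with a bidirectional secret pair, then tear the session down
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d \
      --hostid 6f42f786-7175-4746-b686-8365485f4d3d \
      --dhchap-secret 'DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==:' \
      --dhchap-ctrl-secret 'DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
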
00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.282 { 00:12:14.282 "cntlid": 103, 00:12:14.282 "qid": 0, 00:12:14.282 "state": "enabled", 00:12:14.282 "thread": "nvmf_tgt_poll_group_000", 00:12:14.282 "listen_address": { 00:12:14.282 "trtype": "TCP", 00:12:14.282 "adrfam": "IPv4", 00:12:14.282 "traddr": "10.0.0.2", 00:12:14.282 "trsvcid": "4420" 00:12:14.282 }, 00:12:14.282 "peer_address": { 00:12:14.282 "trtype": "TCP", 00:12:14.282 "adrfam": "IPv4", 00:12:14.282 "traddr": "10.0.0.1", 00:12:14.282 "trsvcid": "40176" 00:12:14.282 }, 00:12:14.282 "auth": { 00:12:14.282 "state": "completed", 00:12:14.282 "digest": "sha512", 00:12:14.282 "dhgroup": "null" 00:12:14.282 } 00:12:14.282 } 00:12:14.282 ]' 00:12:14.282 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.541 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.541 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.541 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:14.541 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.541 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.541 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.541 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.799 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:15.367 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.626 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.884 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.884 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.884 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.143 00:12:16.143 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.143 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.143 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.402 { 00:12:16.402 "cntlid": 105, 00:12:16.402 "qid": 0, 00:12:16.402 "state": "enabled", 00:12:16.402 "thread": "nvmf_tgt_poll_group_000", 00:12:16.402 "listen_address": { 00:12:16.402 "trtype": "TCP", 00:12:16.402 "adrfam": "IPv4", 00:12:16.402 "traddr": "10.0.0.2", 00:12:16.402 "trsvcid": "4420" 00:12:16.402 }, 00:12:16.402 "peer_address": { 00:12:16.402 "trtype": "TCP", 00:12:16.402 "adrfam": "IPv4", 00:12:16.402 "traddr": "10.0.0.1", 00:12:16.402 "trsvcid": "40192" 00:12:16.402 }, 00:12:16.402 "auth": { 00:12:16.402 "state": "completed", 00:12:16.402 "digest": "sha512", 00:12:16.402 "dhgroup": "ffdhe2048" 00:12:16.402 } 00:12:16.402 } 00:12:16.402 ]' 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:16.402 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.403 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.403 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.403 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.662 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:12:17.597 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.597 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:17.597 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.597 01:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.597 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.597 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.597 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:17.597 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.855 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.114 00:12:18.114 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.114 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.114 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.372 { 00:12:18.372 "cntlid": 107, 00:12:18.372 "qid": 0, 00:12:18.372 "state": "enabled", 00:12:18.372 "thread": "nvmf_tgt_poll_group_000", 00:12:18.372 "listen_address": { 00:12:18.372 "trtype": "TCP", 00:12:18.372 "adrfam": "IPv4", 00:12:18.372 "traddr": "10.0.0.2", 00:12:18.372 "trsvcid": "4420" 00:12:18.372 }, 00:12:18.372 "peer_address": { 00:12:18.372 "trtype": "TCP", 00:12:18.372 "adrfam": "IPv4", 00:12:18.372 "traddr": "10.0.0.1", 00:12:18.372 "trsvcid": "40224" 00:12:18.372 }, 00:12:18.372 "auth": { 00:12:18.372 "state": "completed", 00:12:18.372 "digest": "sha512", 00:12:18.372 "dhgroup": "ffdhe2048" 00:12:18.372 } 00:12:18.372 } 00:12:18.372 ]' 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.372 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.630 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:12:19.195 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.195 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:19.195 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.195 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.195 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.195 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.195 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:19.195 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.453 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.020 00:12:20.020 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.020 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.020 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.020 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.020 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.020 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.020 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.279 { 00:12:20.279 "cntlid": 109, 00:12:20.279 "qid": 0, 
00:12:20.279 "state": "enabled", 00:12:20.279 "thread": "nvmf_tgt_poll_group_000", 00:12:20.279 "listen_address": { 00:12:20.279 "trtype": "TCP", 00:12:20.279 "adrfam": "IPv4", 00:12:20.279 "traddr": "10.0.0.2", 00:12:20.279 "trsvcid": "4420" 00:12:20.279 }, 00:12:20.279 "peer_address": { 00:12:20.279 "trtype": "TCP", 00:12:20.279 "adrfam": "IPv4", 00:12:20.279 "traddr": "10.0.0.1", 00:12:20.279 "trsvcid": "40258" 00:12:20.279 }, 00:12:20.279 "auth": { 00:12:20.279 "state": "completed", 00:12:20.279 "digest": "sha512", 00:12:20.279 "dhgroup": "ffdhe2048" 00:12:20.279 } 00:12:20.279 } 00:12:20.279 ]' 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.279 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.537 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:12:21.104 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.104 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:21.104 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.104 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.104 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.104 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.104 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:21.104 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.363 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.930 00:12:21.930 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.930 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.930 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.930 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.930 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.930 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.930 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.930 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.930 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.930 { 00:12:21.930 "cntlid": 111, 00:12:21.930 "qid": 0, 00:12:21.930 "state": "enabled", 00:12:21.930 "thread": "nvmf_tgt_poll_group_000", 00:12:21.930 "listen_address": { 00:12:21.930 "trtype": "TCP", 00:12:21.930 "adrfam": "IPv4", 00:12:21.930 "traddr": "10.0.0.2", 00:12:21.930 "trsvcid": "4420" 00:12:21.930 }, 00:12:21.930 "peer_address": { 00:12:21.930 "trtype": "TCP", 00:12:21.930 "adrfam": "IPv4", 00:12:21.930 "traddr": "10.0.0.1", 00:12:21.930 "trsvcid": "40284" 00:12:21.930 }, 00:12:21.930 "auth": { 00:12:21.930 "state": "completed", 00:12:21.930 
"digest": "sha512", 00:12:21.930 "dhgroup": "ffdhe2048" 00:12:21.930 } 00:12:21.930 } 00:12:21.930 ]' 00:12:21.930 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.188 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.188 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.188 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:22.188 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.188 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.188 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.188 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.446 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:12:23.012 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.013 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:23.013 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.013 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.271 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.271 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.271 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.271 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:23.271 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.529 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.788 00:12:23.788 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.788 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.788 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.048 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.048 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.048 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.048 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.048 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.048 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.048 { 00:12:24.048 "cntlid": 113, 00:12:24.048 "qid": 0, 00:12:24.048 "state": "enabled", 00:12:24.048 "thread": "nvmf_tgt_poll_group_000", 00:12:24.048 "listen_address": { 00:12:24.048 "trtype": "TCP", 00:12:24.048 "adrfam": "IPv4", 00:12:24.048 "traddr": "10.0.0.2", 00:12:24.048 "trsvcid": "4420" 00:12:24.048 }, 00:12:24.048 "peer_address": { 00:12:24.048 "trtype": "TCP", 00:12:24.048 "adrfam": "IPv4", 00:12:24.048 "traddr": "10.0.0.1", 00:12:24.048 "trsvcid": "46902" 00:12:24.048 }, 00:12:24.048 "auth": { 00:12:24.048 "state": "completed", 00:12:24.048 "digest": "sha512", 00:12:24.048 "dhgroup": "ffdhe3072" 00:12:24.048 } 00:12:24.048 } 00:12:24.048 ]' 00:12:24.048 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.048 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.048 01:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.308 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:24.308 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.308 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.308 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.308 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.567 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:12:25.132 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.132 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:25.132 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.132 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.132 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.132 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.132 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:25.132 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.395 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.673 00:12:25.673 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.673 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.673 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.931 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.931 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.931 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.931 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.931 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.931 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.931 { 00:12:25.931 "cntlid": 115, 00:12:25.931 "qid": 0, 00:12:25.931 "state": "enabled", 00:12:25.931 "thread": "nvmf_tgt_poll_group_000", 00:12:25.931 "listen_address": { 00:12:25.931 "trtype": "TCP", 00:12:25.931 "adrfam": "IPv4", 00:12:25.931 "traddr": "10.0.0.2", 00:12:25.931 "trsvcid": "4420" 00:12:25.931 }, 00:12:25.931 "peer_address": { 00:12:25.931 "trtype": "TCP", 00:12:25.931 "adrfam": "IPv4", 00:12:25.931 "traddr": "10.0.0.1", 00:12:25.931 "trsvcid": "46926" 00:12:25.931 }, 00:12:25.931 "auth": { 00:12:25.931 "state": "completed", 00:12:25.931 "digest": "sha512", 00:12:25.931 "dhgroup": "ffdhe3072" 00:12:25.931 } 00:12:25.931 } 00:12:25.931 ]' 00:12:25.931 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.189 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.189 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.189 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:26.189 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.189 01:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.189 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.190 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.448 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:27.384 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.385 01:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.385 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.953 00:12:27.953 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.953 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.953 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.953 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.953 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.953 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.953 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.953 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.953 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.953 { 00:12:27.953 "cntlid": 117, 00:12:27.953 "qid": 0, 00:12:27.953 "state": "enabled", 00:12:27.953 "thread": "nvmf_tgt_poll_group_000", 00:12:27.953 "listen_address": { 00:12:27.953 "trtype": "TCP", 00:12:27.953 "adrfam": "IPv4", 00:12:27.953 "traddr": "10.0.0.2", 00:12:27.953 "trsvcid": "4420" 00:12:27.953 }, 00:12:27.953 "peer_address": { 00:12:27.953 "trtype": "TCP", 00:12:27.953 "adrfam": "IPv4", 00:12:27.953 "traddr": "10.0.0.1", 00:12:27.953 "trsvcid": "46954" 00:12:27.953 }, 00:12:27.953 "auth": { 00:12:27.953 "state": "completed", 00:12:27.953 "digest": "sha512", 00:12:27.953 "dhgroup": "ffdhe3072" 00:12:27.953 } 00:12:27.953 } 00:12:27.953 ]' 00:12:27.953 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.211 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.211 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.211 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:28.211 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.211 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.211 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.211 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
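The iteration completing here follows the same five-step pattern as every connect_authenticate pass in this trace: pin the host to one digest/dhgroup combination, authorize the host NQN on the subsystem with a key pair, attach a controller through the host RPC socket (where the DH-HMAC-CHAP exchange actually runs), check the negotiated parameters on the resulting qpair, and detach. Below is a minimal sketch of one such iteration, reconstructed only from RPCs visible in the trace above; it assumes the keyring entries key2/ckey2 were registered earlier in target/auth.sh, outside this excerpt.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d

# Restrict the host to a single digest/dhgroup combination for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Authorize the host on the subsystem, with a controller (bidirectional) key as well.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller via the host's RPC socket; authentication happens here.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm the qpair finished authentication with the expected digest/dhgroup.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# Tear down before the next keyid/dhgroup iteration.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0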
00:12:28.469 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:12:29.036 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.036 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:29.036 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.036 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.036 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.036 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.036 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:29.036 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:29.294 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:29.861 00:12:29.862 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.862 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.862 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.862 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.862 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.862 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.862 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.120 { 00:12:30.120 "cntlid": 119, 00:12:30.120 "qid": 0, 00:12:30.120 "state": "enabled", 00:12:30.120 "thread": "nvmf_tgt_poll_group_000", 00:12:30.120 "listen_address": { 00:12:30.120 "trtype": "TCP", 00:12:30.120 "adrfam": "IPv4", 00:12:30.120 "traddr": "10.0.0.2", 00:12:30.120 "trsvcid": "4420" 00:12:30.120 }, 00:12:30.120 "peer_address": { 00:12:30.120 "trtype": "TCP", 00:12:30.120 "adrfam": "IPv4", 00:12:30.120 "traddr": "10.0.0.1", 00:12:30.120 "trsvcid": "46976" 00:12:30.120 }, 00:12:30.120 "auth": { 00:12:30.120 "state": "completed", 00:12:30.120 "digest": "sha512", 00:12:30.120 "dhgroup": "ffdhe3072" 00:12:30.120 } 00:12:30.120 } 00:12:30.120 ]' 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.120 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.378 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:12:30.945 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:30.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.945 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:30.945 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.945 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.945 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.945 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.946 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.946 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:30.946 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.204 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.771 00:12:31.772 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.772 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.772 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.030 { 00:12:32.030 "cntlid": 121, 00:12:32.030 "qid": 0, 00:12:32.030 "state": "enabled", 00:12:32.030 "thread": "nvmf_tgt_poll_group_000", 00:12:32.030 "listen_address": { 00:12:32.030 "trtype": "TCP", 00:12:32.030 "adrfam": "IPv4", 00:12:32.030 "traddr": "10.0.0.2", 00:12:32.030 "trsvcid": "4420" 00:12:32.030 }, 00:12:32.030 "peer_address": { 00:12:32.030 "trtype": "TCP", 00:12:32.030 "adrfam": "IPv4", 00:12:32.030 "traddr": "10.0.0.1", 00:12:32.030 "trsvcid": "47000" 00:12:32.030 }, 00:12:32.030 "auth": { 00:12:32.030 "state": "completed", 00:12:32.030 "digest": "sha512", 00:12:32.030 "dhgroup": "ffdhe4096" 00:12:32.030 } 00:12:32.030 } 00:12:32.030 ]' 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.030 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.598 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:12:33.165 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.165 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:33.165 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.165 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.165 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.165 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.165 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:33.165 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.684 00:12:33.684 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.684 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.684 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.943 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.943 01:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.943 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.943 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.943 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.943 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.943 { 00:12:33.943 "cntlid": 123, 00:12:33.943 "qid": 0, 00:12:33.943 "state": "enabled", 00:12:33.943 "thread": "nvmf_tgt_poll_group_000", 00:12:33.943 "listen_address": { 00:12:33.943 "trtype": "TCP", 00:12:33.943 "adrfam": "IPv4", 00:12:33.943 "traddr": "10.0.0.2", 00:12:33.943 "trsvcid": "4420" 00:12:33.943 }, 00:12:33.943 "peer_address": { 00:12:33.943 "trtype": "TCP", 00:12:33.943 "adrfam": "IPv4", 00:12:33.943 "traddr": "10.0.0.1", 00:12:33.943 "trsvcid": "56504" 00:12:33.943 }, 00:12:33.943 "auth": { 00:12:33.943 "state": "completed", 00:12:33.943 "digest": "sha512", 00:12:33.943 "dhgroup": "ffdhe4096" 00:12:33.943 } 00:12:33.943 } 00:12:33.943 ]' 00:12:33.943 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.943 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.258 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.258 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:34.258 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.258 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.258 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.258 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.517 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:12:35.085 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.085 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:35.085 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.085 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.086 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
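The trace above is one full pass of the auth.sh key loop for the sha512/ffdhe4096 pair: the host-side bdev_nvme layer is pinned to the digest/dhgroup under test, the host NQN is registered on cnode0 with the key pair, a controller is attached so the qpair authenticates, the qpair's auth block is asserted, and the same credentials are then exercised through nvme-cli before the host is removed again. Condensed as a sketch, using the rig's own paths and addresses as they appear in the trace (rpc.py under the spdk_repo checkout, host RPC socket /var/tmp/host.sock, target listener 10.0.0.2:4420; key1/ckey1 are key names loaded earlier in the run, outside this excerpt):

    #!/usr/bin/env bash
    # One iteration of the DH-HMAC-CHAP connect test, as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d

    # Host side: restrict the initiator to the digest/dhgroup under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Target side: allow the host on the subsystem with this key pair.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach a controller so the qpair authenticates, check it, detach.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0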
00:12:35.086 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.086 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:35.086 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.345 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.603 00:12:35.603 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.603 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.603 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.171 { 00:12:36.171 "cntlid": 125, 00:12:36.171 "qid": 0, 00:12:36.171 "state": "enabled", 00:12:36.171 "thread": "nvmf_tgt_poll_group_000", 00:12:36.171 "listen_address": { 00:12:36.171 "trtype": "TCP", 00:12:36.171 "adrfam": "IPv4", 00:12:36.171 "traddr": "10.0.0.2", 00:12:36.171 "trsvcid": "4420" 00:12:36.171 }, 00:12:36.171 "peer_address": { 00:12:36.171 "trtype": "TCP", 00:12:36.171 "adrfam": "IPv4", 00:12:36.171 "traddr": "10.0.0.1", 00:12:36.171 "trsvcid": "56528" 00:12:36.171 }, 00:12:36.171 "auth": { 00:12:36.171 "state": "completed", 00:12:36.171 "digest": "sha512", 00:12:36.171 "dhgroup": "ffdhe4096" 00:12:36.171 } 00:12:36.171 } 00:12:36.171 ]' 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.171 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.429 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:12:36.997 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.997 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:36.997 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.997 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.997 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.997 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.997 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:36.997 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.255 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.514 00:12:37.514 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.514 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.514 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.773 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.773 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.773 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.773 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.773 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.773 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.773 { 00:12:37.773 "cntlid": 127, 00:12:37.773 "qid": 0, 00:12:37.773 "state": "enabled", 00:12:37.773 "thread": "nvmf_tgt_poll_group_000", 00:12:37.773 "listen_address": { 00:12:37.773 "trtype": "TCP", 00:12:37.773 "adrfam": "IPv4", 00:12:37.773 "traddr": "10.0.0.2", 00:12:37.773 "trsvcid": "4420" 00:12:37.773 }, 00:12:37.773 "peer_address": { 
00:12:37.773 "trtype": "TCP", 00:12:37.773 "adrfam": "IPv4", 00:12:37.773 "traddr": "10.0.0.1", 00:12:37.773 "trsvcid": "56568" 00:12:37.773 }, 00:12:37.773 "auth": { 00:12:37.773 "state": "completed", 00:12:37.773 "digest": "sha512", 00:12:37.773 "dhgroup": "ffdhe4096" 00:12:37.773 } 00:12:37.773 } 00:12:37.773 ]' 00:12:37.773 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.773 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.773 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.032 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:38.032 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.032 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.032 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.032 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.289 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:38.855 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.113 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.114 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.680 00:12:39.680 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.680 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.680 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.939 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.939 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.939 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.939 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.939 { 00:12:39.939 "cntlid": 129, 00:12:39.939 "qid": 0, 00:12:39.939 "state": "enabled", 00:12:39.939 "thread": "nvmf_tgt_poll_group_000", 00:12:39.939 "listen_address": { 00:12:39.939 "trtype": "TCP", 00:12:39.939 "adrfam": "IPv4", 00:12:39.939 "traddr": "10.0.0.2", 00:12:39.939 "trsvcid": "4420" 00:12:39.939 }, 00:12:39.939 "peer_address": { 00:12:39.939 "trtype": "TCP", 00:12:39.939 "adrfam": "IPv4", 00:12:39.939 "traddr": "10.0.0.1", 00:12:39.939 "trsvcid": "56586" 00:12:39.939 }, 00:12:39.939 "auth": { 00:12:39.939 "state": "completed", 00:12:39.939 "digest": "sha512", 00:12:39.939 "dhgroup": "ffdhe6144" 00:12:39.939 } 00:12:39.939 } 00:12:39.939 ]' 00:12:39.939 01:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.939 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.198 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.133 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.699 00:12:41.699 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.699 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.699 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.957 { 00:12:41.957 "cntlid": 131, 00:12:41.957 "qid": 0, 00:12:41.957 "state": "enabled", 00:12:41.957 "thread": "nvmf_tgt_poll_group_000", 00:12:41.957 "listen_address": { 00:12:41.957 "trtype": "TCP", 00:12:41.957 "adrfam": "IPv4", 00:12:41.957 "traddr": "10.0.0.2", 00:12:41.957 "trsvcid": "4420" 00:12:41.957 }, 00:12:41.957 "peer_address": { 00:12:41.957 "trtype": "TCP", 00:12:41.957 "adrfam": "IPv4", 00:12:41.957 "traddr": "10.0.0.1", 00:12:41.957 "trsvcid": "56598" 00:12:41.957 }, 00:12:41.957 "auth": { 00:12:41.957 "state": "completed", 00:12:41.957 "digest": "sha512", 00:12:41.957 "dhgroup": "ffdhe6144" 00:12:41.957 } 00:12:41.957 } 00:12:41.957 ]' 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.957 01:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.957 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.215 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:12:42.781 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.781 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:42.781 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.781 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.781 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.781 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.781 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:42.781 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
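Each verification step reads the subsystem's live qpairs back from the target and asserts the negotiated auth fields one by one, exactly as the jq probes in the trace do. As a minimal self-contained sketch over the JSON shape shown above (the expected values here are this ffdhe6144 iteration's and change per pass):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target-side view of the connected qpair, as dumped in the trace.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The pass succeeds only if DH-HMAC-CHAP completed with the digest and
    # dhgroup that bdev_nvme_set_options restricted the host to.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]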
00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.040 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.607 00:12:43.607 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.607 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.607 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.607 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.607 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.607 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.607 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.866 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.866 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.866 { 00:12:43.866 "cntlid": 133, 00:12:43.866 "qid": 0, 00:12:43.866 "state": "enabled", 00:12:43.866 "thread": "nvmf_tgt_poll_group_000", 00:12:43.866 "listen_address": { 00:12:43.866 "trtype": "TCP", 00:12:43.866 "adrfam": "IPv4", 00:12:43.866 "traddr": "10.0.0.2", 00:12:43.866 "trsvcid": "4420" 00:12:43.866 }, 00:12:43.866 "peer_address": { 00:12:43.866 "trtype": "TCP", 00:12:43.866 "adrfam": "IPv4", 00:12:43.866 "traddr": "10.0.0.1", 00:12:43.866 "trsvcid": "33548" 00:12:43.866 }, 00:12:43.866 "auth": { 00:12:43.866 "state": "completed", 00:12:43.866 "digest": "sha512", 00:12:43.866 "dhgroup": "ffdhe6144" 00:12:43.866 } 00:12:43.866 } 00:12:43.866 ]' 00:12:43.866 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.866 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.866 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.866 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:43.866 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.866 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.866 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.866 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.124 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:12:44.692 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.692 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:44.692 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.692 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.692 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.692 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.692 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:44.692 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.950 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.951 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.951 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:45.517 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.517 { 00:12:45.517 "cntlid": 135, 00:12:45.517 "qid": 0, 00:12:45.517 "state": "enabled", 00:12:45.517 "thread": "nvmf_tgt_poll_group_000", 00:12:45.517 "listen_address": { 00:12:45.517 "trtype": "TCP", 00:12:45.517 "adrfam": "IPv4", 00:12:45.517 "traddr": "10.0.0.2", 00:12:45.517 "trsvcid": "4420" 00:12:45.517 }, 00:12:45.517 "peer_address": { 00:12:45.517 "trtype": "TCP", 00:12:45.517 "adrfam": "IPv4", 00:12:45.517 "traddr": "10.0.0.1", 00:12:45.517 "trsvcid": "33566" 00:12:45.517 }, 00:12:45.517 "auth": { 00:12:45.517 "state": "completed", 00:12:45.517 "digest": "sha512", 00:12:45.517 "dhgroup": "ffdhe6144" 00:12:45.517 } 00:12:45.517 } 00:12:45.517 ]' 00:12:45.517 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.776 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.776 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.776 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:45.776 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.776 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.776 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.776 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.036 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.628 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.887 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.454 00:12:47.454 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.454 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.454 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.713 { 00:12:47.713 "cntlid": 137, 00:12:47.713 "qid": 0, 00:12:47.713 "state": "enabled", 00:12:47.713 "thread": "nvmf_tgt_poll_group_000", 00:12:47.713 "listen_address": { 00:12:47.713 "trtype": "TCP", 00:12:47.713 "adrfam": "IPv4", 00:12:47.713 "traddr": "10.0.0.2", 00:12:47.713 "trsvcid": "4420" 00:12:47.713 }, 00:12:47.713 "peer_address": { 00:12:47.713 "trtype": "TCP", 00:12:47.713 "adrfam": "IPv4", 00:12:47.713 "traddr": "10.0.0.1", 00:12:47.713 "trsvcid": "33584" 00:12:47.713 }, 00:12:47.713 "auth": { 00:12:47.713 "state": "completed", 00:12:47.713 "digest": "sha512", 00:12:47.713 "dhgroup": "ffdhe8192" 00:12:47.713 } 00:12:47.713 } 00:12:47.713 ]' 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.713 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.713 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.972 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.972 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.972 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.972 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.230 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:12:48.796 01:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.796 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:48.796 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.796 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.796 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.796 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.796 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:48.796 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.055 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.622 00:12:49.622 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.622 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
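The nvme-cli leg repeats the same handshake through the kernel initiator: connect with the host secret (and, for bidirectional auth, the controller secret) in the DHHC-1 form shown above, then disconnect and de-register the host before the next key/dhgroup pass. A trimmed sketch, with the secrets elided to '...' (and noting, as background rather than anything the trace itself states, that the two digits after 'DHHC-1:' indicate how the base64 secret was transformed, 00 meaning untransformed):

    uuid=6f42f786-7175-4746-b686-8365485f4d3d
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Kernel-initiator connect with bidirectional DH-HMAC-CHAP secrets.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${uuid}" --hostid "$uuid" \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'

    # Tear down, then de-register the host on the target for the next pass.
    nvme disconnect -n "$subnqn"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:${uuid}"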
00:12:49.622 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.881 { 00:12:49.881 "cntlid": 139, 00:12:49.881 "qid": 0, 00:12:49.881 "state": "enabled", 00:12:49.881 "thread": "nvmf_tgt_poll_group_000", 00:12:49.881 "listen_address": { 00:12:49.881 "trtype": "TCP", 00:12:49.881 "adrfam": "IPv4", 00:12:49.881 "traddr": "10.0.0.2", 00:12:49.881 "trsvcid": "4420" 00:12:49.881 }, 00:12:49.881 "peer_address": { 00:12:49.881 "trtype": "TCP", 00:12:49.881 "adrfam": "IPv4", 00:12:49.881 "traddr": "10.0.0.1", 00:12:49.881 "trsvcid": "33618" 00:12:49.881 }, 00:12:49.881 "auth": { 00:12:49.881 "state": "completed", 00:12:49.881 "digest": "sha512", 00:12:49.881 "dhgroup": "ffdhe8192" 00:12:49.881 } 00:12:49.881 } 00:12:49.881 ]' 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.881 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.140 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:50.140 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.140 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.140 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.140 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.399 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:01:MTQwNzBiMTM0MzJjM2ZmMGVhYzMwNzViNWE4NjFjMGHMuyT8: --dhchap-ctrl-secret DHHC-1:02:NWNhNWJhNzg4ZTMzYzY0ZGNiYTQ4NmZjNzZlN2ZhMmQzMzlmM2E3MmVlMTQ1Y2E4ArbxFw==: 00:12:50.966 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.967 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:50.967 01:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.967 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.967 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.967 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.967 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:50.967 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.226 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.794 00:12:51.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.053 { 00:12:52.053 "cntlid": 141, 00:12:52.053 "qid": 0, 00:12:52.053 "state": "enabled", 00:12:52.053 "thread": "nvmf_tgt_poll_group_000", 00:12:52.053 "listen_address": { 00:12:52.053 "trtype": "TCP", 00:12:52.053 "adrfam": "IPv4", 00:12:52.053 "traddr": "10.0.0.2", 00:12:52.053 "trsvcid": "4420" 00:12:52.053 }, 00:12:52.053 "peer_address": { 00:12:52.053 "trtype": "TCP", 00:12:52.053 "adrfam": "IPv4", 00:12:52.053 "traddr": "10.0.0.1", 00:12:52.053 "trsvcid": "33644" 00:12:52.053 }, 00:12:52.053 "auth": { 00:12:52.053 "state": "completed", 00:12:52.053 "digest": "sha512", 00:12:52.053 "dhgroup": "ffdhe8192" 00:12:52.053 } 00:12:52.053 } 00:12:52.053 ]' 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.053 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.312 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:52.312 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.312 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.312 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.312 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.570 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:02:ZjlmZDY4ZmM1YjRkZjNkNzZhYzNmNWNjMDMwYzFiOGM5MjQ1NDg1ZWRkN2QwNDBi29EQRQ==: --dhchap-ctrl-secret DHHC-1:01:ZGY4MWVkMjc3YzJmYWUzNTFmNTExMGM4NWNiNDlhYjd80bU5: 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.138 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.706 00:12:53.706 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.706 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.706 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.965 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.965 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.965 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.965 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.965 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.965 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.965 { 00:12:53.965 "cntlid": 
143, 00:12:53.965 "qid": 0, 00:12:53.965 "state": "enabled", 00:12:53.965 "thread": "nvmf_tgt_poll_group_000", 00:12:53.965 "listen_address": { 00:12:53.965 "trtype": "TCP", 00:12:53.965 "adrfam": "IPv4", 00:12:53.965 "traddr": "10.0.0.2", 00:12:53.965 "trsvcid": "4420" 00:12:53.965 }, 00:12:53.965 "peer_address": { 00:12:53.965 "trtype": "TCP", 00:12:53.965 "adrfam": "IPv4", 00:12:53.965 "traddr": "10.0.0.1", 00:12:53.965 "trsvcid": "37802" 00:12:53.965 }, 00:12:53.965 "auth": { 00:12:53.965 "state": "completed", 00:12:53.965 "digest": "sha512", 00:12:53.965 "dhgroup": "ffdhe8192" 00:12:53.965 } 00:12:53.965 } 00:12:53.965 ]' 00:12:53.965 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.224 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.224 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.224 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:54.224 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.224 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.224 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.224 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.483 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.047 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.306 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.871 00:12:55.871 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.871 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.871 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.129 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.129 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.129 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.129 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.129 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.129 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.129 { 00:12:56.129 
"cntlid": 145, 00:12:56.129 "qid": 0, 00:12:56.129 "state": "enabled", 00:12:56.129 "thread": "nvmf_tgt_poll_group_000", 00:12:56.129 "listen_address": { 00:12:56.129 "trtype": "TCP", 00:12:56.129 "adrfam": "IPv4", 00:12:56.129 "traddr": "10.0.0.2", 00:12:56.130 "trsvcid": "4420" 00:12:56.130 }, 00:12:56.130 "peer_address": { 00:12:56.130 "trtype": "TCP", 00:12:56.130 "adrfam": "IPv4", 00:12:56.130 "traddr": "10.0.0.1", 00:12:56.130 "trsvcid": "37838" 00:12:56.130 }, 00:12:56.130 "auth": { 00:12:56.130 "state": "completed", 00:12:56.130 "digest": "sha512", 00:12:56.130 "dhgroup": "ffdhe8192" 00:12:56.130 } 00:12:56.130 } 00:12:56.130 ]' 00:12:56.130 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.130 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.130 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.130 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:56.130 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.130 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.130 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.130 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.388 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:00:NzZhYWRhMDYwYzg1MWYwMDJhNjZlZWU1ZTRkY2I3MmM4OTYxZDY2MTA2ZGY0ZTE0TlkHEw==: --dhchap-ctrl-secret DHHC-1:03:ZWEyNDhmZmRhMzc0YjQ3YzMzZWIxNjM0ZjcwNmMxN2MyMmExNGU2YzhjYmVkZWM4ZjZmZTBiYjI1YmIxNWY5Mti8XKk=: 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:57.325 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:57.584 request: 00:12:57.584 { 00:12:57.584 "name": "nvme0", 00:12:57.584 "trtype": "tcp", 00:12:57.584 "traddr": "10.0.0.2", 00:12:57.584 "adrfam": "ipv4", 00:12:57.584 "trsvcid": "4420", 00:12:57.584 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:57.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d", 00:12:57.584 "prchk_reftag": false, 00:12:57.584 "prchk_guard": false, 00:12:57.584 "hdgst": false, 00:12:57.584 "ddgst": false, 00:12:57.584 "dhchap_key": "key2", 00:12:57.584 "method": "bdev_nvme_attach_controller", 00:12:57.584 "req_id": 1 00:12:57.584 } 00:12:57.584 Got JSON-RPC error response 00:12:57.584 response: 00:12:57.584 { 00:12:57.584 "code": -5, 00:12:57.584 "message": "Input/output error" 00:12:57.584 } 00:12:57.584 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:57.584 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.584 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:57.584 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.584 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:57.584 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.584 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.843 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.843 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.843 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:57.844 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:58.410 request: 00:12:58.410 { 00:12:58.410 "name": "nvme0", 00:12:58.410 "trtype": "tcp", 00:12:58.410 "traddr": "10.0.0.2", 00:12:58.410 "adrfam": "ipv4", 00:12:58.410 "trsvcid": "4420", 00:12:58.410 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:58.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d", 00:12:58.411 "prchk_reftag": false, 00:12:58.411 "prchk_guard": false, 00:12:58.411 "hdgst": false, 00:12:58.411 "ddgst": false, 00:12:58.411 "dhchap_key": "key1", 00:12:58.411 "dhchap_ctrlr_key": "ckey2", 00:12:58.411 "method": "bdev_nvme_attach_controller", 00:12:58.411 "req_id": 1 00:12:58.411 } 00:12:58.411 Got JSON-RPC error response 00:12:58.411 response: 00:12:58.411 { 00:12:58.411 "code": -5, 00:12:58.411 "message": "Input/output error" 
00:12:58.411 } 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key1 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.411 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.669 request: 00:12:58.669 { 00:12:58.669 "name": "nvme0", 00:12:58.669 "trtype": "tcp", 00:12:58.669 "traddr": "10.0.0.2", 00:12:58.669 "adrfam": "ipv4", 00:12:58.669 "trsvcid": "4420", 00:12:58.669 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:58.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d", 00:12:58.669 "prchk_reftag": false, 00:12:58.669 "prchk_guard": false, 00:12:58.669 "hdgst": false, 00:12:58.669 "ddgst": false, 00:12:58.669 "dhchap_key": "key1", 00:12:58.669 "dhchap_ctrlr_key": "ckey1", 00:12:58.669 "method": "bdev_nvme_attach_controller", 00:12:58.669 "req_id": 1 00:12:58.669 } 00:12:58.669 Got JSON-RPC error response 00:12:58.669 response: 00:12:58.669 { 00:12:58.669 "code": -5, 00:12:58.669 "message": "Input/output error" 00:12:58.669 } 00:12:58.669 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:58.669 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.669 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.669 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.669 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:12:58.669 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.669 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 81363 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 81363 ']' 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 81363 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81363 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.927 killing process with pid 81363 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81363' 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 81363 00:12:58.927 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 81363 00:12:58.927 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:58.927 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:12:58.927 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.927 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=84340 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 84340 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 84340 ']' 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.928 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 84340 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 84340 ']' 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.861 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.120 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.120 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:00.120 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:00.120 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.120 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.378 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.946 00:13:00.946 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.946 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.946 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.205 { 00:13:01.205 "cntlid": 1, 00:13:01.205 "qid": 0, 00:13:01.205 "state": "enabled", 00:13:01.205 "thread": "nvmf_tgt_poll_group_000", 00:13:01.205 "listen_address": { 00:13:01.205 "trtype": "TCP", 00:13:01.205 "adrfam": "IPv4", 00:13:01.205 "traddr": "10.0.0.2", 00:13:01.205 "trsvcid": "4420" 00:13:01.205 }, 00:13:01.205 "peer_address": { 00:13:01.205 "trtype": "TCP", 00:13:01.205 "adrfam": "IPv4", 00:13:01.205 "traddr": "10.0.0.1", 00:13:01.205 "trsvcid": "37894" 00:13:01.205 }, 00:13:01.205 "auth": { 00:13:01.205 "state": "completed", 00:13:01.205 "digest": "sha512", 00:13:01.205 "dhgroup": "ffdhe8192" 00:13:01.205 } 00:13:01.205 } 00:13:01.205 ]' 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.205 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.464 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid 6f42f786-7175-4746-b686-8365485f4d3d --dhchap-secret DHHC-1:03:ZmUzZTc3OGIyMzFhZjgyNjFlZmJkNjM1Y2U5ZjQzZmE0MTY3YjhkMGMwMzIwODRjNzFjYzAxZjkxNjRlM2E1NOJCoSg=: 00:13:02.031 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.031 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:13:02.031 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.031 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.032 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.032 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --dhchap-key key3 00:13:02.032 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.032 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.032 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.032 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:02.032 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.291 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.550 request: 00:13:02.550 { 00:13:02.550 "name": "nvme0", 00:13:02.550 "trtype": "tcp", 00:13:02.550 "traddr": "10.0.0.2", 00:13:02.550 "adrfam": "ipv4", 00:13:02.550 "trsvcid": "4420", 00:13:02.550 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:02.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d", 00:13:02.550 "prchk_reftag": false, 00:13:02.550 "prchk_guard": false, 00:13:02.550 "hdgst": false, 00:13:02.550 "ddgst": false, 00:13:02.550 "dhchap_key": "key3", 00:13:02.550 "method": "bdev_nvme_attach_controller", 00:13:02.550 "req_id": 1 00:13:02.550 } 00:13:02.550 Got JSON-RPC error response 00:13:02.550 response: 00:13:02.550 { 00:13:02.550 "code": -5, 00:13:02.550 "message": "Input/output error" 00:13:02.550 } 00:13:02.550 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
00:13:02.550 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.550 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:02.550 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.550 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:02.550 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:02.550 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:02.550 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.809 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.068 request: 00:13:03.068 { 00:13:03.068 "name": "nvme0", 00:13:03.068 "trtype": "tcp", 00:13:03.068 "traddr": "10.0.0.2", 00:13:03.068 "adrfam": "ipv4", 00:13:03.068 "trsvcid": "4420", 00:13:03.068 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:03.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d", 00:13:03.068 "prchk_reftag": false, 00:13:03.068 "prchk_guard": false, 00:13:03.068 "hdgst": false, 00:13:03.068 "ddgst": false, 00:13:03.068 "dhchap_key": "key3", 00:13:03.068 "method": "bdev_nvme_attach_controller", 00:13:03.068 "req_id": 1 00:13:03.068 } 00:13:03.068 Got JSON-RPC error response 
00:13:03.068 response: 00:13:03.068 { 00:13:03.068 "code": -5, 00:13:03.068 "message": "Input/output error" 00:13:03.068 } 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:03.068 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:03.327 01:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:03.327 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:03.586 request: 00:13:03.586 { 00:13:03.586 "name": "nvme0", 00:13:03.586 "trtype": "tcp", 00:13:03.586 "traddr": "10.0.0.2", 00:13:03.586 "adrfam": "ipv4", 00:13:03.586 "trsvcid": "4420", 00:13:03.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:03.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d", 00:13:03.586 "prchk_reftag": false, 00:13:03.586 "prchk_guard": false, 00:13:03.586 "hdgst": false, 00:13:03.586 "ddgst": false, 00:13:03.586 "dhchap_key": "key0", 00:13:03.586 "dhchap_ctrlr_key": "key1", 00:13:03.586 "method": "bdev_nvme_attach_controller", 00:13:03.586 "req_id": 1 00:13:03.586 } 00:13:03.586 Got JSON-RPC error response 00:13:03.586 response: 00:13:03.586 { 00:13:03.586 "code": -5, 00:13:03.586 "message": "Input/output error" 00:13:03.586 } 00:13:03.587 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:03.587 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:03.587 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:03.587 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:03.587 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:03.587 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:03.854 00:13:03.854 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:03.854 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.854 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:03.854 01:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.854 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.854 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 81395 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 81395 ']' 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 81395 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81395 00:13:04.426 killing process with pid 81395 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81395' 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 81395 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 81395 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.426 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.426 rmmod nvme_tcp 00:13:04.684 rmmod nvme_fabrics 00:13:04.684 rmmod nvme_keyring 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 84340 ']' 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 84340 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 84340 ']' 00:13:04.684 
01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 84340 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84340 00:13:04.684 killing process with pid 84340 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84340' 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 84340 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 84340 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nqZ /tmp/spdk.key-sha256.gfR /tmp/spdk.key-sha384.5tm /tmp/spdk.key-sha512.zG1 /tmp/spdk.key-sha512.4Ek /tmp/spdk.key-sha384.zZg /tmp/spdk.key-sha256.FWE '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:04.684 00:13:04.684 real 2m41.056s 00:13:04.684 user 6m25.920s 00:13:04.684 sys 0m24.214s 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.684 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.684 ************************************ 00:13:04.684 END TEST nvmf_auth_target 00:13:04.684 ************************************ 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.943 ************************************ 00:13:04.943 START TEST nvmf_bdevio_no_huge 00:13:04.943 ************************************ 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:04.943 * Looking for test storage... 00:13:04.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.943 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:04.944 Cannot find device "nvmf_tgt_br" 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:04.944 Cannot find device "nvmf_tgt_br2" 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:04.944 Cannot find device "nvmf_tgt_br" 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:04.944 Cannot find device "nvmf_tgt_br2" 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:04.944 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:05.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:05.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:05.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:13:05.202 00:13:05.202 --- 10.0.0.2 ping statistics --- 00:13:05.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.202 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:05.202 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:05.202 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:13:05.202 00:13:05.202 --- 10.0.0.3 ping statistics --- 00:13:05.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.202 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:05.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:05.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:13:05.202 00:13:05.202 --- 10.0.0.1 ping statistics --- 00:13:05.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.202 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:05.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=84643 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 84643 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 84643 ']' 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.202 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:05.460 [2024-07-25 01:56:20.547896] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:13:05.460 [2024-07-25 01:56:20.548011] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:05.460 [2024-07-25 01:56:20.689560] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. [2024-07-25 01:56:20.694210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.718 [2024-07-25 01:56:20.799489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.718 [2024-07-25 01:56:20.800040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.718 [2024-07-25 01:56:20.800071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.718 [2024-07-25 01:56:20.800082] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.718 [2024-07-25 01:56:20.800092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.718 [2024-07-25 01:56:20.800289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:05.718 [2024-07-25 01:56:20.800595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:05.718 [2024-07-25 01:56:20.800942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:05.718 [2024-07-25 01:56:20.801210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.718 [2024-07-25 01:56:20.807583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.286 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.286 [2024-07-25 01:56:21.575860] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- 
# set +x 00:13:06.544 Malloc0 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.544 [2024-07-25 01:56:21.618531] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:06.544 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:06.545 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:06.545 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:06.545 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:06.545 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:06.545 { 00:13:06.545 "params": { 00:13:06.545 "name": "Nvme$subsystem", 00:13:06.545 "trtype": "$TEST_TRANSPORT", 00:13:06.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:06.545 "adrfam": "ipv4", 00:13:06.545 "trsvcid": "$NVMF_PORT", 00:13:06.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:06.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:06.545 "hdgst": ${hdgst:-false}, 00:13:06.545 "ddgst": ${ddgst:-false} 00:13:06.545 }, 00:13:06.545 "method": "bdev_nvme_attach_controller" 00:13:06.545 } 00:13:06.545 EOF 00:13:06.545 )") 00:13:06.545 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:06.545 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
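The trace above shows gen_nvmf_target_json assembling the JSON that bdevio consumes over an anonymous file descriptor (--json /dev/fd/62): each subsystem contributes one bdev_nvme_attach_controller stanza through a here-document, and jq pretty-prints the combined result, matching the object printed just below. A minimal sketch of the same pattern for the single subsystem used here (an assumption-laden simplification: the real helper in test/nvmf/common.sh fills the name, address, and digest fields from variables rather than hard-coding them):

gen_target_json_sketch() {
  # Emit one attach-controller stanza and pretty-print it, as the trace does.
  cat <<'JSON' | jq .
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
}

Feeding this to the tool mirrors the invocation above, for example: bdevio --json <(gen_target_json_sketch) --no-huge -s 1024.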
00:13:06.545 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:06.545 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:06.545 "params": { 00:13:06.545 "name": "Nvme1", 00:13:06.545 "trtype": "tcp", 00:13:06.545 "traddr": "10.0.0.2", 00:13:06.545 "adrfam": "ipv4", 00:13:06.545 "trsvcid": "4420", 00:13:06.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:06.545 "hdgst": false, 00:13:06.545 "ddgst": false 00:13:06.545 }, 00:13:06.545 "method": "bdev_nvme_attach_controller" 00:13:06.545 }' 00:13:06.545 [2024-07-25 01:56:21.679484] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:06.545 [2024-07-25 01:56:21.679602] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84679 ] 00:13:06.545 [2024-07-25 01:56:21.821302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. [2024-07-25 01:56:21.824387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.803 [2024-07-25 01:56:21.925339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.803 [2024-07-25 01:56:21.925463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.803 [2024-07-25 01:56:21.925474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.803 [2024-07-25 01:56:21.940153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:06.803 I/O targets: 00:13:06.803 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:06.803 00:13:06.803 00:13:06.803 CUnit - A unit testing framework for C - Version 2.1-3 00:13:06.803 http://cunit.sourceforge.net/ 00:13:06.803 00:13:06.803 00:13:06.803 Suite: bdevio tests on: Nvme1n1 00:13:06.803 Test: blockdev write read block ...passed 00:13:06.803 Test: blockdev write zeroes read block ...passed 00:13:07.061 Test: blockdev write zeroes read no split ...passed 00:13:07.061 Test: blockdev write zeroes read split ...passed 00:13:07.061 Test: blockdev write zeroes read split partial ...passed 00:13:07.061 Test: blockdev reset ...[2024-07-25 01:56:22.129367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:07.061 [2024-07-25 01:56:22.129687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4bad0 (9): Bad file descriptor 00:13:07.061 [2024-07-25 01:56:22.147106] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:07.061 passed 00:13:07.061 Test: blockdev write read 8 blocks ...passed 00:13:07.061 Test: blockdev write read size > 128k ...passed 00:13:07.061 Test: blockdev write read invalid size ...passed 00:13:07.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.061 Test: blockdev write read max offset ...passed 00:13:07.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.061 Test: blockdev writev readv 8 blocks ...passed 00:13:07.061 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.061 Test: blockdev writev readv block ...passed 00:13:07.061 Test: blockdev writev readv size > 128k ...passed 00:13:07.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.061 Test: blockdev comparev and writev ...[2024-07-25 01:56:22.156588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.061 [2024-07-25 01:56:22.156814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.156877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.061 [2024-07-25 01:56:22.156895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.157202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.061 [2024-07-25 01:56:22.157242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.157263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.061 [2024-07-25 01:56:22.157276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.157574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.061 [2024-07-25 01:56:22.157612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.157632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.061 [2024-07-25 01:56:22.157644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.158114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.061 [2024-07-25 01:56:22.158149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.158171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.061 [2024-07-25 01:56:22.158183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:07.061 passed 00:13:07.061 Test: blockdev nvme passthru rw ...passed 00:13:07.061 Test: blockdev nvme passthru vendor specific ...[2024-07-25 01:56:22.159160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.061 [2024-07-25 01:56:22.159201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.159321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.061 [2024-07-25 01:56:22.159341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:07.061 [2024-07-25 01:56:22.159445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.061 [2024-07-25 01:56:22.159480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:07.061 passed 00:13:07.061 Test: blockdev nvme admin passthru ...[2024-07-25 01:56:22.159641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.061 [2024-07-25 01:56:22.159667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:07.061 passed 00:13:07.061 Test: blockdev copy ...passed 00:13:07.061 00:13:07.061 Run Summary: Type Total Ran Passed Failed Inactive 00:13:07.061 suites 1 1 n/a 0 0 00:13:07.062 tests 23 23 23 0 0 00:13:07.062 asserts 152 152 152 0 n/a 00:13:07.062 00:13:07.062 Elapsed time = 0.162 seconds 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.320 rmmod nvme_tcp 00:13:07.320 rmmod nvme_fabrics 00:13:07.320 rmmod nvme_keyring 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 84643 ']' 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 84643 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 84643 ']' 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 84643 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84643 00:13:07.320 killing process with pid 84643 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84643' 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 84643 00:13:07.320 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 84643 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:07.909 00:13:07.909 real 0m2.897s 00:13:07.909 user 0m9.493s 00:13:07.909 sys 0m1.098s 00:13:07.909 ************************************ 00:13:07.909 END TEST nvmf_bdevio_no_huge 00:13:07.909 ************************************ 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
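Each suite in this log is driven by the run_test wrapper, whose argument-count guard ('[' 3 -le 1 ']') appears in the trace above; it prints the START TEST/END TEST banners and produces the real/user/sys timing summary seen after each suite. A simplified sketch of that control flow, under the assumption that the real helper in autotest_common.sh additionally manages xtrace state and per-test log files (omitted here):

run_test_sketch() {
  local test_name=$1
  shift
  echo "START TEST $test_name"
  time "$@"                  # run the suite script; yields the real/user/sys summary
  local rc=$?
  echo "END TEST $test_name"
  return $rc
}

run_test_sketch nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp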
00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.909 ************************************ 00:13:07.909 START TEST nvmf_tls 00:13:07.909 ************************************ 00:13:07.909 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:07.909 * Looking for test storage... 00:13:07.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:07.910 Cannot find device 
"nvmf_tgt_br" 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:07.910 Cannot find device "nvmf_tgt_br2" 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:07.910 Cannot find device "nvmf_tgt_br" 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:07.910 Cannot find device "nvmf_tgt_br2" 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:07.910 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:08.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:08.169 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:08.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:13:08.169 00:13:08.169 --- 10.0.0.2 ping statistics --- 00:13:08.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.169 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:08.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:08.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:13:08.170 00:13:08.170 --- 10.0.0.3 ping statistics --- 00:13:08.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.170 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:08.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:13:08.170 00:13:08.170 --- 10.0.0.1 ping statistics --- 00:13:08.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.170 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84858 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84858 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84858 ']' 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.170 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:08.428 [2024-07-25 01:56:23.471442] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:08.428 [2024-07-25 01:56:23.471580] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.428 [2024-07-25 01:56:23.596130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:13:08.428 [2024-07-25 01:56:23.614941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.428 [2024-07-25 01:56:23.657530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.428 [2024-07-25 01:56:23.657591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.428 [2024-07-25 01:56:23.657616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.428 [2024-07-25 01:56:23.657626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.428 [2024-07-25 01:56:23.657635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.428 [2024-07-25 01:56:23.657670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.360 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.360 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:09.360 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.361 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:09.361 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.361 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.361 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:09.361 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:09.618 true 00:13:09.618 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:09.618 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:09.876 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:09.876 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:09.876 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:09.876 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:09.876 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:10.134 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:10.134 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:10.134 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:10.393 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:10.393 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:10.652 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:10.652 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:10.652 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:10.652 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:10.910 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:10.910 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:10.910 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:11.168 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:11.168 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:11.426 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:11.426 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:11.426 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:11.684 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:11.684 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:11.943 
01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.XxQDDgm7s4 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.GYf9dqhHa2 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:11.943 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:11.944 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.XxQDDgm7s4 00:13:11.944 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GYf9dqhHa2 00:13:11.944 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:12.202 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:12.461 [2024-07-25 01:56:27.700075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:12.461 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.XxQDDgm7s4 00:13:12.461 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XxQDDgm7s4 00:13:12.461 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:12.720 [2024-07-25 01:56:27.930026] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.720 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:12.979 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:13.238 [2024-07-25 01:56:28.346106] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:13.238 [2024-07-25 01:56:28.346318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.238 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:13.496 malloc0 00:13:13.496 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:13.755 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XxQDDgm7s4 00:13:14.014 [2024-07-25 01:56:29.088553] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:14.014 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.XxQDDgm7s4 00:13:23.990 Initializing NVMe Controllers 00:13:23.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:23.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:23.990 Initialization complete. Launching workers. 00:13:23.990 ======================================================== 00:13:23.990 Latency(us) 00:13:23.990 Device Information : IOPS MiB/s Average min max 00:13:23.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10356.46 40.45 6181.05 985.89 10569.30 00:13:23.990 ======================================================== 00:13:23.990 Total : 10356.46 40.45 6181.05 985.89 10569.30 00:13:23.990 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XxQDDgm7s4 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XxQDDgm7s4' 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85089 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85089 /var/tmp/bdevperf.sock 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85089 ']' 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:24.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.248 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.248 [2024-07-25 01:56:39.354132] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
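The interchange-format PSKs written to the --psk-path files above were produced by format_interchange_psk, which delegates to a format_key helper (the nvmf/common.sh@702-705 trace lines, ending in "python -"). A minimal sketch of what that helper appears to do, assuming, consistently with the key strings printed above, that a little-endian CRC32 of the key is appended before base64 encoding; the real implementation in nvmf/common.sh may differ in detail:

# format_key <prefix> <key> <digest>; digest 1 selects the :01: (SHA-256) hash
# identifier seen in the keys above, digest 2 the :02: (SHA-384) variant that the
# run generates later with a 48-byte key.
format_key() {
	local prefix=$1 key=$2 digest=$3
	python - <<- EOF
		import base64, zlib

		key = b"$key"
		crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte integrity tail
		b64 = base64.b64encode(key + crc).decode("utf-8")
		print("$prefix:{:02x}:{}:".format($digest, b64), end="")
	EOF
}
# format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 should reproduce the
# first key in the trace: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Note that the configured PSK is the literal 32-character hex string, not its decoded bytes; the base64 payload is simply that string plus the CRC.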
00:13:24.248 [2024-07-25 01:56:39.354454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85089 ] 00:13:24.248 [2024-07-25 01:56:39.483467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:24.248 [2024-07-25 01:56:39.494809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.248 [2024-07-25 01:56:39.529116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.506 [2024-07-25 01:56:39.557921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:24.506 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.506 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:24.506 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XxQDDgm7s4 00:13:24.764 [2024-07-25 01:56:39.831986] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:24.764 [2024-07-25 01:56:39.832127] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:24.764 TLSTESTn1 00:13:24.764 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:24.764 Running I/O for 10 seconds... 
00:13:36.971 00:13:36.971 Latency(us) 00:13:36.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.971 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:36.971 Verification LBA range: start 0x0 length 0x2000 00:13:36.971 TLSTESTn1 : 10.02 4158.80 16.25 0.00 0.00 30721.69 6374.87 23712.12 00:13:36.971 =================================================================================================================== 00:13:36.971 Total : 4158.80 16.25 0.00 0.00 30721.69 6374.87 23712.12 00:13:36.971 0 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 85089 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85089 ']' 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85089 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85089 00:13:36.971 killing process with pid 85089 00:13:36.971 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.971 00:13:36.971 Latency(us) 00:13:36.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.971 =================================================================================================================== 00:13:36.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85089' 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85089 00:13:36.971 [2024-07-25 01:56:50.101554] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85089 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GYf9dqhHa2 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GYf9dqhHa2 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.971 01:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GYf9dqhHa2 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GYf9dqhHa2' 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85209 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85209 /var/tmp/bdevperf.sock 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85209 ']' 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.971 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.971 [2024-07-25 01:56:50.315919] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:36.971 [2024-07-25 01:56:50.316376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85209 ] 00:13:36.971 [2024-07-25 01:56:50.451532] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
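Each run_bdevperf case in this file follows the same three-step pattern, visible again here: start bdevperf idle, attach a controller over its RPC socket (the TLS handshake happens during the attach), then drive I/O. A condensed sketch with paths and flags copied from the surrounding commands; $subnqn, $hostnqn and $psk vary per case, and the waitforlisten/killprocess process management is omitted (the harness does not literally background with &):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
	-b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" $psk   # handshake here
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # only reached if the attach succeeded

For this case the attach uses /tmp/tmp.GYf9dqhHa2, the key that was never registered with the target, so the expected failure surfaces at step two.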
00:13:36.971 [2024-07-25 01:56:50.470434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.971 [2024-07-25 01:56:50.507020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.971 [2024-07-25 01:56:50.536646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:36.971 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.971 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:36.971 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GYf9dqhHa2 00:13:36.971 [2024-07-25 01:56:51.555133] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:36.971 [2024-07-25 01:56:51.555297] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:36.971 [2024-07-25 01:56:51.560273] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:36.971 [2024-07-25 01:56:51.560823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180af00 (107): Transport endpoint is not connected 00:13:36.971 [2024-07-25 01:56:51.561809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180af00 (9): Bad file descriptor 00:13:36.971 [2024-07-25 01:56:51.562805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:36.971 [2024-07-25 01:56:51.562827] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:36.971 [2024-07-25 01:56:51.562875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:36.971 request: 00:13:36.971 { 00:13:36.971 "name": "TLSTEST", 00:13:36.971 "trtype": "tcp", 00:13:36.971 "traddr": "10.0.0.2", 00:13:36.971 "adrfam": "ipv4", 00:13:36.971 "trsvcid": "4420", 00:13:36.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.971 "prchk_reftag": false, 00:13:36.971 "prchk_guard": false, 00:13:36.971 "hdgst": false, 00:13:36.971 "ddgst": false, 00:13:36.971 "psk": "/tmp/tmp.GYf9dqhHa2", 00:13:36.971 "method": "bdev_nvme_attach_controller", 00:13:36.971 "req_id": 1 00:13:36.971 } 00:13:36.971 Got JSON-RPC error response 00:13:36.971 response: 00:13:36.971 { 00:13:36.971 "code": -5, 00:13:36.971 "message": "Input/output error" 00:13:36.971 } 00:13:36.971 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 85209 00:13:36.971 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85209 ']' 00:13:36.971 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85209 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85209 00:13:36.972 killing process with pid 85209 00:13:36.972 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.972 00:13:36.972 Latency(us) 00:13:36.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.972 =================================================================================================================== 00:13:36.972 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85209' 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85209 00:13:36.972 [2024-07-25 01:56:51.610236] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85209 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XxQDDgm7s4 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XxQDDgm7s4 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XxQDDgm7s4 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XxQDDgm7s4' 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85237 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85237 /var/tmp/bdevperf.sock 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85237 ']' 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.972 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.972 [2024-07-25 01:56:51.800701] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:36.972 [2024-07-25 01:56:51.800795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85237 ] 00:13:36.972 [2024-07-25 01:56:51.919285] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
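The NOT wrapper whose xtrace lines bracket each of these cases (autotest_common.sh@650 through @677 above) turns an expected failure into a test success. A hedged reconstruction from the traced lines only; the real helper applies further checks (@661, @672) whose details are not visible in this trace:

NOT() {
	local es=0
	"$@" || es=$?    # @653: run the wrapped command, capture its exit status
	# @661 (( es > 128 )) and @672 [[ -n ... ]] filter signal-style exit codes
	# and allowed statuses; omitted here since the trace does not show their bodies
	(( es != 0 ))    # @677 is literally (( !es == 0 )): NOT succeeds only if the command failed
}

So each NOT run_bdevperf ... in this block passes precisely because bdev_nvme_attach_controller comes back with the Input/output error.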
00:13:36.972 [2024-07-25 01:56:51.936494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.972 [2024-07-25 01:56:51.970948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.972 [2024-07-25 01:56:52.001087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:36.972 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.972 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:36.972 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.XxQDDgm7s4 00:13:37.230 [2024-07-25 01:56:52.291874] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:37.230 [2024-07-25 01:56:52.292305] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:37.230 [2024-07-25 01:56:52.298215] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:37.230 [2024-07-25 01:56:52.298460] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:37.230 [2024-07-25 01:56:52.298648] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:37.230 [2024-07-25 01:56:52.298756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0f00 (107): Transport endpoint is not connected 00:13:37.230 [2024-07-25 01:56:52.299745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0f00 (9): Bad file descriptor 00:13:37.231 [2024-07-25 01:56:52.300750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:37.231 [2024-07-25 01:56:52.300946] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:37.231 [2024-07-25 01:56:52.301070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:13:37.231 request: 00:13:37.231 { 00:13:37.231 "name": "TLSTEST", 00:13:37.231 "trtype": "tcp", 00:13:37.231 "traddr": "10.0.0.2", 00:13:37.231 "adrfam": "ipv4", 00:13:37.231 "trsvcid": "4420", 00:13:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.231 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:37.231 "prchk_reftag": false, 00:13:37.231 "prchk_guard": false, 00:13:37.231 "hdgst": false, 00:13:37.231 "ddgst": false, 00:13:37.231 "psk": "/tmp/tmp.XxQDDgm7s4", 00:13:37.231 "method": "bdev_nvme_attach_controller", 00:13:37.231 "req_id": 1 00:13:37.231 } 00:13:37.231 Got JSON-RPC error response 00:13:37.231 response: 00:13:37.231 { 00:13:37.231 "code": -5, 00:13:37.231 "message": "Input/output error" 00:13:37.231 } 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 85237 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85237 ']' 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85237 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85237 00:13:37.231 killing process with pid 85237 00:13:37.231 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.231 00:13:37.231 Latency(us) 00:13:37.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.231 =================================================================================================================== 00:13:37.231 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85237' 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85237 00:13:37.231 [2024-07-25 01:56:52.345954] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85237 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XxQDDgm7s4 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XxQDDgm7s4 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XxQDDgm7s4 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XxQDDgm7s4' 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85257 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85257 /var/tmp/bdevperf.sock 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85257 ']' 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:37.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.231 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.489 [2024-07-25 01:56:52.533616] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:37.489 [2024-07-25 01:56:52.533686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85257 ] 00:13:37.489 [2024-07-25 01:56:52.651193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
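These negative cases all fail at the same point: for TLS the target derives a PSK identity of the form NVMe0R01 <hostnqn> <subnqn> (the exact string printed by tcp_sock_get_key in the errors above and below) and looks it up among the keys registered per subsystem/host pair. The registration that those lookups resolve against was done once, earlier in this run:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-enabled listener
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XxQDDgm7s4

Only the (host1, cnode1) identity has a key, so connecting as host2 or to cnode2 aborts in posix_sock_psk_find_session_server_cb, the initiator sees errno 107 on the socket, and the RPC layer reports it as the -5 Input/output error.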
00:13:37.489 [2024-07-25 01:56:52.667227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.489 [2024-07-25 01:56:52.703977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.489 [2024-07-25 01:56:52.733136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XxQDDgm7s4 00:13:38.425 [2024-07-25 01:56:53.641731] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:38.425 [2024-07-25 01:56:53.642132] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:38.425 [2024-07-25 01:56:53.648448] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:38.425 [2024-07-25 01:56:53.648672] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:38.425 [2024-07-25 01:56:53.648875] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:38.425 [2024-07-25 01:56:53.649805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1522f00 (107): Transport endpoint is not connected 00:13:38.425 [2024-07-25 01:56:53.650797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1522f00 (9): Bad file descriptor 00:13:38.425 [2024-07-25 01:56:53.651794] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:38.425 [2024-07-25 01:56:53.651960] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:38.425 [2024-07-25 01:56:53.652082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:38.425 request: 00:13:38.425 { 00:13:38.425 "name": "TLSTEST", 00:13:38.425 "trtype": "tcp", 00:13:38.425 "traddr": "10.0.0.2", 00:13:38.425 "adrfam": "ipv4", 00:13:38.425 "trsvcid": "4420", 00:13:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:38.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.425 "prchk_reftag": false, 00:13:38.425 "prchk_guard": false, 00:13:38.425 "hdgst": false, 00:13:38.425 "ddgst": false, 00:13:38.425 "psk": "/tmp/tmp.XxQDDgm7s4", 00:13:38.425 "method": "bdev_nvme_attach_controller", 00:13:38.425 "req_id": 1 00:13:38.425 } 00:13:38.425 Got JSON-RPC error response 00:13:38.425 response: 00:13:38.425 { 00:13:38.425 "code": -5, 00:13:38.425 "message": "Input/output error" 00:13:38.425 } 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 85257 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85257 ']' 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85257 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85257 00:13:38.425 killing process with pid 85257 00:13:38.425 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.425 00:13:38.425 Latency(us) 00:13:38.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.425 =================================================================================================================== 00:13:38.425 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85257' 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85257 00:13:38.425 [2024-07-25 01:56:53.691478] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:38.425 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85257 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85279 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85279 /var/tmp/bdevperf.sock 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85279 ']' 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:38.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.684 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.684 [2024-07-25 01:56:53.882942] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:38.684 [2024-07-25 01:56:53.883288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85279 ] 00:13:38.943 [2024-07-25 01:56:54.005958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
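This is the last of four negative attach cases this block exercises, summarized here for orientation (each wrapped in NOT, each expected to end in the same JSON-RPC -5 / Input/output error):

# tls.sh@146  host1 -> cnode1, wrong key (key_2)  : handshake fails, offered PSK does not match the registered one
# tls.sh@149  host2 -> cnode1, registered key     : PSK identity lookup fails (unknown host)
# tls.sh@152  host1 -> cnode2, registered key     : PSK identity lookup fails (unknown subsystem)
# tls.sh@155  host1 -> cnode1, no key at all      : plain TCP against the TLS-required listener, dropped by the target

The attach below is the no-key case; note that its rpc.py invocation simply omits --psk.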
00:13:38.943 [2024-07-25 01:56:54.024397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.943 [2024-07-25 01:56:54.060594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.943 [2024-07-25 01:56:54.090045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.510 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.510 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:39.510 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:39.769 [2024-07-25 01:56:55.028655] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:39.769 [2024-07-25 01:56:55.029972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x899b20 (9): Bad file descriptor 00:13:39.769 [2024-07-25 01:56:55.030968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:39.769 [2024-07-25 01:56:55.030991] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:39.769 [2024-07-25 01:56:55.031038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:13:39.769 request: 00:13:39.769 { 00:13:39.769 "name": "TLSTEST", 00:13:39.769 "trtype": "tcp", 00:13:39.769 "traddr": "10.0.0.2", 00:13:39.769 "adrfam": "ipv4", 00:13:39.769 "trsvcid": "4420", 00:13:39.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.769 "prchk_reftag": false, 00:13:39.769 "prchk_guard": false, 00:13:39.769 "hdgst": false, 00:13:39.769 "ddgst": false, 00:13:39.769 "method": "bdev_nvme_attach_controller", 00:13:39.769 "req_id": 1 00:13:39.769 } 00:13:39.769 Got JSON-RPC error response 00:13:39.769 response: 00:13:39.769 { 00:13:39.769 "code": -5, 00:13:39.769 "message": "Input/output error" 00:13:39.769 } 00:13:39.769 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 85279 00:13:39.769 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85279 ']' 00:13:39.769 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85279 00:13:39.769 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:39.769 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.769 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85279 00:13:40.028 killing process with pid 85279 00:13:40.028 Received shutdown signal, test time was about 10.000000 seconds 00:13:40.028 00:13:40.028 Latency(us) 00:13:40.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.028 =================================================================================================================== 00:13:40.028 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:40.028 
01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85279' 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85279 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85279 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 84858 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84858 ']' 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84858 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84858 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84858' 00:13:40.028 killing process with pid 84858 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84858 00:13:40.028 [2024-07-25 01:56:55.236208] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:40.028 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84858 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@160 -- # mktemp 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.jo7FAS4e6c 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.jo7FAS4e6c 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85316 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85316 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:40.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85316 ']' 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.288 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.288 [2024-07-25 01:56:55.480896] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:40.288 [2024-07-25 01:56:55.481146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.547 [2024-07-25 01:56:55.599656] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:40.547 [2024-07-25 01:56:55.614938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.547 [2024-07-25 01:56:55.647252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.547 [2024-07-25 01:56:55.647329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.547 [2024-07-25 01:56:55.647339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.547 [2024-07-25 01:56:55.647346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.547 [2024-07-25 01:56:55.647352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
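The key_long value generated above is the NVMe/TCP PSK interchange form of the configured hex key: a NVMeTLSkey-1 prefix, the digest selector passed to the helper (2 here), and a base64 payload. Decoding that payload shows it is the ASCII hex string itself with four CRC-32 bytes appended. A rough reconstruction of format_interchange_psk under those assumptions (treating the key as ASCII and appending the CRC little-endian are inferences from the printed key, not taken from nvmf/common.sh):

  key_hex=00112233445566778899aabbccddeeff0011223344556677
  # Payload = ASCII bytes of the hex string + CRC-32 (byte order assumed).
  key_long=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key_hex")

  # Store the key with owner-only permissions; as the later cases in this
  # run show, SPDK refuses PSK files readable by group or other.
  key_path=$(mktemp)
  echo -n "$key_long" > "$key_path"
  chmod 0600 "$key_path"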
00:13:40.547 [2024-07-25 01:56:55.647377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.547 [2024-07-25 01:56:55.674394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.jo7FAS4e6c 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jo7FAS4e6c 00:13:40.547 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:40.806 [2024-07-25 01:56:56.016544] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.806 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:41.065 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:41.324 [2024-07-25 01:56:56.476674] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:41.324 [2024-07-25 01:56:56.476899] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.324 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:41.583 malloc0 00:13:41.583 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:41.855 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jo7FAS4e6c 00:13:42.125 [2024-07-25 01:56:57.190996] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jo7FAS4e6c 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jo7FAS4e6c' 00:13:42.125 01:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85358 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85358 /var/tmp/bdevperf.sock 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85358 ']' 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.125 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.125 [2024-07-25 01:56:57.261396] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:42.125 [2024-07-25 01:56:57.261496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85358 ] 00:13:42.125 [2024-07-25 01:56:57.379213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
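Untangling the xtrace, the target-side TLS setup that setup_nvmf_tgt ran above comes down to six RPCs, after which the initiator attach below presents the same key via --psk and the TLSTESTn1 verify workload runs for ten seconds (each command is taken verbatim from this run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/tmp/tmp.jo7FAS4e6c

  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k            # -k: listener requires TLS
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk "$KEY"   # PSK path (deprecated in v24.09)

  # Initiator side: attach with the same key, then drive I/O.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests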
00:13:42.125 [2024-07-25 01:56:57.398322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.383 [2024-07-25 01:56:57.442261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.383 [2024-07-25 01:56:57.475853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:42.383 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.383 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:42.383 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jo7FAS4e6c 00:13:42.642 [2024-07-25 01:56:57.736489] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:42.642 [2024-07-25 01:56:57.736600] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:42.642 TLSTESTn1 00:13:42.642 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:42.642 Running I/O for 10 seconds... 00:13:54.848 00:13:54.848 Latency(us) 00:13:54.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.848 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:54.848 Verification LBA range: start 0x0 length 0x2000 00:13:54.848 TLSTESTn1 : 10.01 4312.98 16.85 0.00 0.00 29624.38 5689.72 22878.02 00:13:54.848 =================================================================================================================== 00:13:54.848 Total : 4312.98 16.85 0.00 0.00 29624.38 5689.72 22878.02 00:13:54.848 0 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 85358 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85358 ']' 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85358 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85358 00:13:54.848 killing process with pid 85358 00:13:54.848 Received shutdown signal, test time was about 10.000000 seconds 00:13:54.848 00:13:54.848 Latency(us) 00:13:54.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.848 =================================================================================================================== 00:13:54.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 85358' 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85358 00:13:54.848 [2024-07-25 01:57:07.985055] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:54.848 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85358 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.jo7FAS4e6c 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jo7FAS4e6c 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jo7FAS4e6c 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jo7FAS4e6c 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jo7FAS4e6c' 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85485 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85485 /var/tmp/bdevperf.sock 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85485 ']' 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.848 [2024-07-25 01:57:08.186558] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:54.848 [2024-07-25 01:57:08.186649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85485 ] 00:13:54.848 [2024-07-25 01:57:08.308204] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:54.848 [2024-07-25 01:57:08.322509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.848 [2024-07-25 01:57:08.355489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.848 [2024-07-25 01:57:08.382798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jo7FAS4e6c 00:13:54.848 [2024-07-25 01:57:08.669078] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.848 [2024-07-25 01:57:08.669158] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:54.848 [2024-07-25 01:57:08.669168] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.jo7FAS4e6c 00:13:54.848 request: 00:13:54.848 { 00:13:54.848 "name": "TLSTEST", 00:13:54.848 "trtype": "tcp", 00:13:54.848 "traddr": "10.0.0.2", 00:13:54.848 "adrfam": "ipv4", 00:13:54.848 "trsvcid": "4420", 00:13:54.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.848 "prchk_reftag": false, 00:13:54.848 "prchk_guard": false, 00:13:54.848 "hdgst": false, 00:13:54.848 "ddgst": false, 00:13:54.848 "psk": "/tmp/tmp.jo7FAS4e6c", 00:13:54.848 "method": "bdev_nvme_attach_controller", 00:13:54.848 "req_id": 1 00:13:54.848 } 00:13:54.848 Got JSON-RPC error response 00:13:54.848 response: 00:13:54.848 { 00:13:54.848 "code": -1, 00:13:54.848 "message": "Operation not permitted" 00:13:54.848 } 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 85485 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85485 ']' 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85485 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85485 00:13:54.848 killing process with pid 85485 
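The -1 (Operation not permitted) failure above is the initiator-side permission check: bdev_nvme_load_psk refuses a PSK file that group or other can read, so the controller is never created. The trigger is nothing more than the chmod from the previous step:

  chmod 0666 /tmp/tmp.jo7FAS4e6c   # world-readable: the key is now rejected
  # The same bdev_nvme_attach_controller call as the successful run then
  # fails with "Incorrect permissions for PSK file" / "Could not load PSK".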
00:13:54.848 Received shutdown signal, test time was about 10.000000 seconds 00:13:54.848 00:13:54.848 Latency(us) 00:13:54.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.848 =================================================================================================================== 00:13:54.848 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:54.848 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85485' 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85485 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85485 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 85316 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85316 ']' 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85316 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85316 00:13:54.849 killing process with pid 85316 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85316' 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85316 00:13:54.849 [2024-07-25 01:57:08.864786] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:54.849 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85316 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85509 00:13:54.849 01:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85509 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85509 ']' 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.849 [2024-07-25 01:57:09.061595] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:54.849 [2024-07-25 01:57:09.061694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.849 [2024-07-25 01:57:09.178415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:54.849 [2024-07-25 01:57:09.192911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.849 [2024-07-25 01:57:09.230574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.849 [2024-07-25 01:57:09.230638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.849 [2024-07-25 01:57:09.230649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.849 [2024-07-25 01:57:09.230656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.849 [2024-07-25 01:57:09.230662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
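The fresh nvmf_tgt (pid 85509) exists to prove the same permission check on the target side: with the key still 0666, setup_nvmf_tgt gets through transport, subsystem, and listener creation below, and the host registration is then expected to fail. A sketch of the failing step (key path as created earlier in this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.jo7FAS4e6c
  # Expected: JSON-RPC -32603 "Internal error" -- tcp_load_psk rejects the
  # group/other-readable file, so the host cannot be added.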
00:13:54.849 [2024-07-25 01:57:09.230684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.849 [2024-07-25 01:57:09.260089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.jo7FAS4e6c 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jo7FAS4e6c 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.jo7FAS4e6c 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jo7FAS4e6c 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:54.849 [2024-07-25 01:57:09.549951] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:54.849 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:54.849 [2024-07-25 01:57:10.034062] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:54.849 [2024-07-25 01:57:10.034323] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.849 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:55.107 malloc0 00:13:55.107 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:55.365 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jo7FAS4e6c 00:13:55.623 [2024-07-25 01:57:10.768423] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:55.623 [2024-07-25 01:57:10.768480] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:55.623 [2024-07-25 01:57:10.768525] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:55.623 request: 00:13:55.623 { 00:13:55.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.623 "host": "nqn.2016-06.io.spdk:host1", 00:13:55.624 "psk": "/tmp/tmp.jo7FAS4e6c", 00:13:55.624 "method": "nvmf_subsystem_add_host", 00:13:55.624 "req_id": 1 00:13:55.624 } 00:13:55.624 Got JSON-RPC error response 00:13:55.624 response: 00:13:55.624 { 00:13:55.624 "code": -32603, 00:13:55.624 "message": "Internal error" 00:13:55.624 } 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 85509 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85509 ']' 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85509 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85509 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85509' 00:13:55.624 killing process with pid 85509 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85509 00:13:55.624 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85509 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.jo7FAS4e6c 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85560 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85560 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:55.882 01:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85560 ']' 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.882 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.882 [2024-07-25 01:57:11.015044] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:55.882 [2024-07-25 01:57:11.015155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.882 [2024-07-25 01:57:11.137648] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:55.882 [2024-07-25 01:57:11.151573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.141 [2024-07-25 01:57:11.184802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.141 [2024-07-25 01:57:11.184897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.141 [2024-07-25 01:57:11.184910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.141 [2024-07-25 01:57:11.184918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.141 [2024-07-25 01:57:11.184925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
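For the final scenario the key is restored to 0600 and a clean target (pid 85560) is configured end to end; once a bdevperf initiator has attached over TLS, save_config is invoked on both RPC sockets, producing the tgtconf and bdevperfconf JSON blobs below. The script keeps that output in shell variables; redirecting to files, as sketched here, is an equivalent way to inspect it:

  chmod 0600 /tmp/tmp.jo7FAS4e6c
  # After the usual transport/subsystem/listener/namespace/host setup and a
  # successful TLS attach from bdevperf:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      save_config > bdevperf.json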
00:13:56.141 [2024-07-25 01:57:11.184949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.141 [2024-07-25 01:57:11.213677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.jo7FAS4e6c 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jo7FAS4e6c 00:13:56.141 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:56.399 [2024-07-25 01:57:11.548196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.399 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:56.658 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:56.917 [2024-07-25 01:57:12.028367] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:56.917 [2024-07-25 01:57:12.028567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.917 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:57.176 malloc0 00:13:57.176 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:57.434 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jo7FAS4e6c 00:13:57.693 [2024-07-25 01:57:12.835221] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=85607 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 85607 /var/tmp/bdevperf.sock 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85607 ']' 
00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.693 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.693 [2024-07-25 01:57:12.913955] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:57.693 [2024-07-25 01:57:12.914053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85607 ] 00:13:57.952 [2024-07-25 01:57:13.039476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:57.952 [2024-07-25 01:57:13.060254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.952 [2024-07-25 01:57:13.104241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.952 [2024-07-25 01:57:13.137781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:58.517 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.517 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:58.518 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jo7FAS4e6c 00:13:58.776 [2024-07-25 01:57:13.970782] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:58.776 [2024-07-25 01:57:13.970906] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:58.776 TLSTESTn1 00:13:58.776 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:59.342 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:59.342 "subsystems": [ 00:13:59.342 { 00:13:59.342 "subsystem": "keyring", 00:13:59.342 "config": [] 00:13:59.342 }, 00:13:59.342 { 00:13:59.342 "subsystem": "iobuf", 00:13:59.342 "config": [ 00:13:59.342 { 00:13:59.342 "method": "iobuf_set_options", 00:13:59.342 "params": { 00:13:59.342 "small_pool_count": 8192, 00:13:59.342 "large_pool_count": 1024, 00:13:59.342 "small_bufsize": 8192, 00:13:59.342 "large_bufsize": 135168 00:13:59.342 } 00:13:59.342 } 00:13:59.342 ] 00:13:59.342 }, 00:13:59.342 { 00:13:59.342 "subsystem": "sock", 00:13:59.342 "config": [ 00:13:59.342 { 00:13:59.342 "method": "sock_set_default_impl", 00:13:59.342 "params": { 00:13:59.342 "impl_name": "uring" 00:13:59.342 } 
00:13:59.342 }, 00:13:59.342 { 00:13:59.342 "method": "sock_impl_set_options", 00:13:59.342 "params": { 00:13:59.342 "impl_name": "ssl", 00:13:59.342 "recv_buf_size": 4096, 00:13:59.342 "send_buf_size": 4096, 00:13:59.342 "enable_recv_pipe": true, 00:13:59.342 "enable_quickack": false, 00:13:59.342 "enable_placement_id": 0, 00:13:59.342 "enable_zerocopy_send_server": true, 00:13:59.342 "enable_zerocopy_send_client": false, 00:13:59.342 "zerocopy_threshold": 0, 00:13:59.342 "tls_version": 0, 00:13:59.342 "enable_ktls": false 00:13:59.342 } 00:13:59.342 }, 00:13:59.342 { 00:13:59.342 "method": "sock_impl_set_options", 00:13:59.342 "params": { 00:13:59.342 "impl_name": "posix", 00:13:59.342 "recv_buf_size": 2097152, 00:13:59.342 "send_buf_size": 2097152, 00:13:59.342 "enable_recv_pipe": true, 00:13:59.342 "enable_quickack": false, 00:13:59.342 "enable_placement_id": 0, 00:13:59.342 "enable_zerocopy_send_server": true, 00:13:59.342 "enable_zerocopy_send_client": false, 00:13:59.342 "zerocopy_threshold": 0, 00:13:59.342 "tls_version": 0, 00:13:59.342 "enable_ktls": false 00:13:59.342 } 00:13:59.342 }, 00:13:59.342 { 00:13:59.342 "method": "sock_impl_set_options", 00:13:59.342 "params": { 00:13:59.342 "impl_name": "uring", 00:13:59.342 "recv_buf_size": 2097152, 00:13:59.342 "send_buf_size": 2097152, 00:13:59.342 "enable_recv_pipe": true, 00:13:59.342 "enable_quickack": false, 00:13:59.342 "enable_placement_id": 0, 00:13:59.342 "enable_zerocopy_send_server": false, 00:13:59.342 "enable_zerocopy_send_client": false, 00:13:59.342 "zerocopy_threshold": 0, 00:13:59.342 "tls_version": 0, 00:13:59.343 "enable_ktls": false 00:13:59.343 } 00:13:59.343 } 00:13:59.343 ] 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "subsystem": "vmd", 00:13:59.343 "config": [] 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "subsystem": "accel", 00:13:59.343 "config": [ 00:13:59.343 { 00:13:59.343 "method": "accel_set_options", 00:13:59.343 "params": { 00:13:59.343 "small_cache_size": 128, 00:13:59.343 "large_cache_size": 16, 00:13:59.343 "task_count": 2048, 00:13:59.343 "sequence_count": 2048, 00:13:59.343 "buf_count": 2048 00:13:59.343 } 00:13:59.343 } 00:13:59.343 ] 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "subsystem": "bdev", 00:13:59.343 "config": [ 00:13:59.343 { 00:13:59.343 "method": "bdev_set_options", 00:13:59.343 "params": { 00:13:59.343 "bdev_io_pool_size": 65535, 00:13:59.343 "bdev_io_cache_size": 256, 00:13:59.343 "bdev_auto_examine": true, 00:13:59.343 "iobuf_small_cache_size": 128, 00:13:59.343 "iobuf_large_cache_size": 16 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "bdev_raid_set_options", 00:13:59.343 "params": { 00:13:59.343 "process_window_size_kb": 1024, 00:13:59.343 "process_max_bandwidth_mb_sec": 0 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "bdev_iscsi_set_options", 00:13:59.343 "params": { 00:13:59.343 "timeout_sec": 30 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "bdev_nvme_set_options", 00:13:59.343 "params": { 00:13:59.343 "action_on_timeout": "none", 00:13:59.343 "timeout_us": 0, 00:13:59.343 "timeout_admin_us": 0, 00:13:59.343 "keep_alive_timeout_ms": 10000, 00:13:59.343 "arbitration_burst": 0, 00:13:59.343 "low_priority_weight": 0, 00:13:59.343 "medium_priority_weight": 0, 00:13:59.343 "high_priority_weight": 0, 00:13:59.343 "nvme_adminq_poll_period_us": 10000, 00:13:59.343 "nvme_ioq_poll_period_us": 0, 00:13:59.343 "io_queue_requests": 0, 00:13:59.343 "delay_cmd_submit": true, 00:13:59.343 "transport_retry_count": 4, 
00:13:59.343 "bdev_retry_count": 3, 00:13:59.343 "transport_ack_timeout": 0, 00:13:59.343 "ctrlr_loss_timeout_sec": 0, 00:13:59.343 "reconnect_delay_sec": 0, 00:13:59.343 "fast_io_fail_timeout_sec": 0, 00:13:59.343 "disable_auto_failback": false, 00:13:59.343 "generate_uuids": false, 00:13:59.343 "transport_tos": 0, 00:13:59.343 "nvme_error_stat": false, 00:13:59.343 "rdma_srq_size": 0, 00:13:59.343 "io_path_stat": false, 00:13:59.343 "allow_accel_sequence": false, 00:13:59.343 "rdma_max_cq_size": 0, 00:13:59.343 "rdma_cm_event_timeout_ms": 0, 00:13:59.343 "dhchap_digests": [ 00:13:59.343 "sha256", 00:13:59.343 "sha384", 00:13:59.343 "sha512" 00:13:59.343 ], 00:13:59.343 "dhchap_dhgroups": [ 00:13:59.343 "null", 00:13:59.343 "ffdhe2048", 00:13:59.343 "ffdhe3072", 00:13:59.343 "ffdhe4096", 00:13:59.343 "ffdhe6144", 00:13:59.343 "ffdhe8192" 00:13:59.343 ] 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "bdev_nvme_set_hotplug", 00:13:59.343 "params": { 00:13:59.343 "period_us": 100000, 00:13:59.343 "enable": false 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "bdev_malloc_create", 00:13:59.343 "params": { 00:13:59.343 "name": "malloc0", 00:13:59.343 "num_blocks": 8192, 00:13:59.343 "block_size": 4096, 00:13:59.343 "physical_block_size": 4096, 00:13:59.343 "uuid": "5effd689-d977-48ad-96aa-89e230add063", 00:13:59.343 "optimal_io_boundary": 0, 00:13:59.343 "md_size": 0, 00:13:59.343 "dif_type": 0, 00:13:59.343 "dif_is_head_of_md": false, 00:13:59.343 "dif_pi_format": 0 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "bdev_wait_for_examine" 00:13:59.343 } 00:13:59.343 ] 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "subsystem": "nbd", 00:13:59.343 "config": [] 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "subsystem": "scheduler", 00:13:59.343 "config": [ 00:13:59.343 { 00:13:59.343 "method": "framework_set_scheduler", 00:13:59.343 "params": { 00:13:59.343 "name": "static" 00:13:59.343 } 00:13:59.343 } 00:13:59.343 ] 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "subsystem": "nvmf", 00:13:59.343 "config": [ 00:13:59.343 { 00:13:59.343 "method": "nvmf_set_config", 00:13:59.343 "params": { 00:13:59.343 "discovery_filter": "match_any", 00:13:59.343 "admin_cmd_passthru": { 00:13:59.343 "identify_ctrlr": false 00:13:59.343 } 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "nvmf_set_max_subsystems", 00:13:59.343 "params": { 00:13:59.343 "max_subsystems": 1024 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "nvmf_set_crdt", 00:13:59.343 "params": { 00:13:59.343 "crdt1": 0, 00:13:59.343 "crdt2": 0, 00:13:59.343 "crdt3": 0 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "nvmf_create_transport", 00:13:59.343 "params": { 00:13:59.343 "trtype": "TCP", 00:13:59.343 "max_queue_depth": 128, 00:13:59.343 "max_io_qpairs_per_ctrlr": 127, 00:13:59.343 "in_capsule_data_size": 4096, 00:13:59.343 "max_io_size": 131072, 00:13:59.343 "io_unit_size": 131072, 00:13:59.343 "max_aq_depth": 128, 00:13:59.343 "num_shared_buffers": 511, 00:13:59.343 "buf_cache_size": 4294967295, 00:13:59.343 "dif_insert_or_strip": false, 00:13:59.343 "zcopy": false, 00:13:59.343 "c2h_success": false, 00:13:59.343 "sock_priority": 0, 00:13:59.343 "abort_timeout_sec": 1, 00:13:59.343 "ack_timeout": 0, 00:13:59.343 "data_wr_pool_size": 0 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "nvmf_create_subsystem", 00:13:59.343 "params": { 00:13:59.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.343 
"allow_any_host": false, 00:13:59.343 "serial_number": "SPDK00000000000001", 00:13:59.343 "model_number": "SPDK bdev Controller", 00:13:59.343 "max_namespaces": 10, 00:13:59.343 "min_cntlid": 1, 00:13:59.343 "max_cntlid": 65519, 00:13:59.343 "ana_reporting": false 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "nvmf_subsystem_add_host", 00:13:59.343 "params": { 00:13:59.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.343 "host": "nqn.2016-06.io.spdk:host1", 00:13:59.343 "psk": "/tmp/tmp.jo7FAS4e6c" 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "nvmf_subsystem_add_ns", 00:13:59.343 "params": { 00:13:59.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.343 "namespace": { 00:13:59.343 "nsid": 1, 00:13:59.343 "bdev_name": "malloc0", 00:13:59.343 "nguid": "5EFFD689D97748AD96AA89E230ADD063", 00:13:59.343 "uuid": "5effd689-d977-48ad-96aa-89e230add063", 00:13:59.343 "no_auto_visible": false 00:13:59.343 } 00:13:59.343 } 00:13:59.343 }, 00:13:59.343 { 00:13:59.343 "method": "nvmf_subsystem_add_listener", 00:13:59.343 "params": { 00:13:59.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.343 "listen_address": { 00:13:59.343 "trtype": "TCP", 00:13:59.343 "adrfam": "IPv4", 00:13:59.343 "traddr": "10.0.0.2", 00:13:59.343 "trsvcid": "4420" 00:13:59.343 }, 00:13:59.343 "secure_channel": true 00:13:59.343 } 00:13:59.343 } 00:13:59.343 ] 00:13:59.343 } 00:13:59.343 ] 00:13:59.343 }' 00:13:59.343 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:59.602 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:59.602 "subsystems": [ 00:13:59.602 { 00:13:59.602 "subsystem": "keyring", 00:13:59.602 "config": [] 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "subsystem": "iobuf", 00:13:59.602 "config": [ 00:13:59.602 { 00:13:59.602 "method": "iobuf_set_options", 00:13:59.602 "params": { 00:13:59.602 "small_pool_count": 8192, 00:13:59.602 "large_pool_count": 1024, 00:13:59.602 "small_bufsize": 8192, 00:13:59.602 "large_bufsize": 135168 00:13:59.602 } 00:13:59.602 } 00:13:59.602 ] 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "subsystem": "sock", 00:13:59.602 "config": [ 00:13:59.602 { 00:13:59.602 "method": "sock_set_default_impl", 00:13:59.602 "params": { 00:13:59.602 "impl_name": "uring" 00:13:59.602 } 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "method": "sock_impl_set_options", 00:13:59.602 "params": { 00:13:59.602 "impl_name": "ssl", 00:13:59.602 "recv_buf_size": 4096, 00:13:59.602 "send_buf_size": 4096, 00:13:59.602 "enable_recv_pipe": true, 00:13:59.602 "enable_quickack": false, 00:13:59.602 "enable_placement_id": 0, 00:13:59.602 "enable_zerocopy_send_server": true, 00:13:59.602 "enable_zerocopy_send_client": false, 00:13:59.602 "zerocopy_threshold": 0, 00:13:59.602 "tls_version": 0, 00:13:59.602 "enable_ktls": false 00:13:59.602 } 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "method": "sock_impl_set_options", 00:13:59.602 "params": { 00:13:59.602 "impl_name": "posix", 00:13:59.602 "recv_buf_size": 2097152, 00:13:59.602 "send_buf_size": 2097152, 00:13:59.602 "enable_recv_pipe": true, 00:13:59.602 "enable_quickack": false, 00:13:59.602 "enable_placement_id": 0, 00:13:59.602 "enable_zerocopy_send_server": true, 00:13:59.602 "enable_zerocopy_send_client": false, 00:13:59.602 "zerocopy_threshold": 0, 00:13:59.602 "tls_version": 0, 00:13:59.602 "enable_ktls": false 00:13:59.602 } 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 
"method": "sock_impl_set_options", 00:13:59.602 "params": { 00:13:59.602 "impl_name": "uring", 00:13:59.602 "recv_buf_size": 2097152, 00:13:59.602 "send_buf_size": 2097152, 00:13:59.602 "enable_recv_pipe": true, 00:13:59.602 "enable_quickack": false, 00:13:59.602 "enable_placement_id": 0, 00:13:59.602 "enable_zerocopy_send_server": false, 00:13:59.602 "enable_zerocopy_send_client": false, 00:13:59.602 "zerocopy_threshold": 0, 00:13:59.602 "tls_version": 0, 00:13:59.602 "enable_ktls": false 00:13:59.602 } 00:13:59.602 } 00:13:59.602 ] 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "subsystem": "vmd", 00:13:59.602 "config": [] 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "subsystem": "accel", 00:13:59.602 "config": [ 00:13:59.602 { 00:13:59.602 "method": "accel_set_options", 00:13:59.602 "params": { 00:13:59.602 "small_cache_size": 128, 00:13:59.602 "large_cache_size": 16, 00:13:59.602 "task_count": 2048, 00:13:59.602 "sequence_count": 2048, 00:13:59.602 "buf_count": 2048 00:13:59.602 } 00:13:59.602 } 00:13:59.602 ] 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "subsystem": "bdev", 00:13:59.602 "config": [ 00:13:59.602 { 00:13:59.602 "method": "bdev_set_options", 00:13:59.602 "params": { 00:13:59.602 "bdev_io_pool_size": 65535, 00:13:59.602 "bdev_io_cache_size": 256, 00:13:59.602 "bdev_auto_examine": true, 00:13:59.602 "iobuf_small_cache_size": 128, 00:13:59.602 "iobuf_large_cache_size": 16 00:13:59.602 } 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "method": "bdev_raid_set_options", 00:13:59.602 "params": { 00:13:59.602 "process_window_size_kb": 1024, 00:13:59.602 "process_max_bandwidth_mb_sec": 0 00:13:59.602 } 00:13:59.602 }, 00:13:59.602 { 00:13:59.602 "method": "bdev_iscsi_set_options", 00:13:59.602 "params": { 00:13:59.602 "timeout_sec": 30 00:13:59.602 } 00:13:59.602 }, 00:13:59.602 { 00:13:59.603 "method": "bdev_nvme_set_options", 00:13:59.603 "params": { 00:13:59.603 "action_on_timeout": "none", 00:13:59.603 "timeout_us": 0, 00:13:59.603 "timeout_admin_us": 0, 00:13:59.603 "keep_alive_timeout_ms": 10000, 00:13:59.603 "arbitration_burst": 0, 00:13:59.603 "low_priority_weight": 0, 00:13:59.603 "medium_priority_weight": 0, 00:13:59.603 "high_priority_weight": 0, 00:13:59.603 "nvme_adminq_poll_period_us": 10000, 00:13:59.603 "nvme_ioq_poll_period_us": 0, 00:13:59.603 "io_queue_requests": 512, 00:13:59.603 "delay_cmd_submit": true, 00:13:59.603 "transport_retry_count": 4, 00:13:59.603 "bdev_retry_count": 3, 00:13:59.603 "transport_ack_timeout": 0, 00:13:59.603 "ctrlr_loss_timeout_sec": 0, 00:13:59.603 "reconnect_delay_sec": 0, 00:13:59.603 "fast_io_fail_timeout_sec": 0, 00:13:59.603 "disable_auto_failback": false, 00:13:59.603 "generate_uuids": false, 00:13:59.603 "transport_tos": 0, 00:13:59.603 "nvme_error_stat": false, 00:13:59.603 "rdma_srq_size": 0, 00:13:59.603 "io_path_stat": false, 00:13:59.603 "allow_accel_sequence": false, 00:13:59.603 "rdma_max_cq_size": 0, 00:13:59.603 "rdma_cm_event_timeout_ms": 0, 00:13:59.603 "dhchap_digests": [ 00:13:59.603 "sha256", 00:13:59.603 "sha384", 00:13:59.603 "sha512" 00:13:59.603 ], 00:13:59.603 "dhchap_dhgroups": [ 00:13:59.603 "null", 00:13:59.603 "ffdhe2048", 00:13:59.603 "ffdhe3072", 00:13:59.603 "ffdhe4096", 00:13:59.603 "ffdhe6144", 00:13:59.603 "ffdhe8192" 00:13:59.603 ] 00:13:59.603 } 00:13:59.603 }, 00:13:59.603 { 00:13:59.603 "method": "bdev_nvme_attach_controller", 00:13:59.603 "params": { 00:13:59.603 "name": "TLSTEST", 00:13:59.603 "trtype": "TCP", 00:13:59.603 "adrfam": "IPv4", 00:13:59.603 "traddr": "10.0.0.2", 00:13:59.603 "trsvcid": 
"4420", 00:13:59.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.603 "prchk_reftag": false, 00:13:59.603 "prchk_guard": false, 00:13:59.603 "ctrlr_loss_timeout_sec": 0, 00:13:59.603 "reconnect_delay_sec": 0, 00:13:59.603 "fast_io_fail_timeout_sec": 0, 00:13:59.603 "psk": "/tmp/tmp.jo7FAS4e6c", 00:13:59.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.603 "hdgst": false, 00:13:59.603 "ddgst": false 00:13:59.603 } 00:13:59.603 }, 00:13:59.603 { 00:13:59.603 "method": "bdev_nvme_set_hotplug", 00:13:59.603 "params": { 00:13:59.603 "period_us": 100000, 00:13:59.603 "enable": false 00:13:59.603 } 00:13:59.603 }, 00:13:59.603 { 00:13:59.603 "method": "bdev_wait_for_examine" 00:13:59.603 } 00:13:59.603 ] 00:13:59.603 }, 00:13:59.603 { 00:13:59.603 "subsystem": "nbd", 00:13:59.603 "config": [] 00:13:59.603 } 00:13:59.603 ] 00:13:59.603 }' 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 85607 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85607 ']' 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85607 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85607 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:59.603 killing process with pid 85607 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85607' 00:13:59.603 Received shutdown signal, test time was about 10.000000 seconds 00:13:59.603 00:13:59.603 Latency(us) 00:13:59.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.603 =================================================================================================================== 00:13:59.603 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85607 00:13:59.603 [2024-07-25 01:57:14.741443] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85607 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 85560 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85560 ']' 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85560 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.603 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85560 00:13:59.862 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:59.862 
01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:59.862 killing process with pid 85560 00:13:59.862 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85560' 00:13:59.862 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85560 00:13:59.862 [2024-07-25 01:57:14.906492] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:59.862 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85560 00:13:59.862 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:59.862 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:59.862 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.862 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:59.862 "subsystems": [ 00:13:59.862 { 00:13:59.862 "subsystem": "keyring", 00:13:59.862 "config": [] 00:13:59.862 }, 00:13:59.862 { 00:13:59.862 "subsystem": "iobuf", 00:13:59.862 "config": [ 00:13:59.862 { 00:13:59.862 "method": "iobuf_set_options", 00:13:59.862 "params": { 00:13:59.862 "small_pool_count": 8192, 00:13:59.862 "large_pool_count": 1024, 00:13:59.862 "small_bufsize": 8192, 00:13:59.862 "large_bufsize": 135168 00:13:59.862 } 00:13:59.862 } 00:13:59.862 ] 00:13:59.862 }, 00:13:59.862 { 00:13:59.862 "subsystem": "sock", 00:13:59.862 "config": [ 00:13:59.862 { 00:13:59.862 "method": "sock_set_default_impl", 00:13:59.862 "params": { 00:13:59.862 "impl_name": "uring" 00:13:59.862 } 00:13:59.862 }, 00:13:59.862 { 00:13:59.862 "method": "sock_impl_set_options", 00:13:59.862 "params": { 00:13:59.862 "impl_name": "ssl", 00:13:59.862 "recv_buf_size": 4096, 00:13:59.862 "send_buf_size": 4096, 00:13:59.862 "enable_recv_pipe": true, 00:13:59.862 "enable_quickack": false, 00:13:59.862 "enable_placement_id": 0, 00:13:59.862 "enable_zerocopy_send_server": true, 00:13:59.862 "enable_zerocopy_send_client": false, 00:13:59.862 "zerocopy_threshold": 0, 00:13:59.862 "tls_version": 0, 00:13:59.862 "enable_ktls": false 00:13:59.862 } 00:13:59.862 }, 00:13:59.862 { 00:13:59.862 "method": "sock_impl_set_options", 00:13:59.862 "params": { 00:13:59.862 "impl_name": "posix", 00:13:59.862 "recv_buf_size": 2097152, 00:13:59.862 "send_buf_size": 2097152, 00:13:59.862 "enable_recv_pipe": true, 00:13:59.862 "enable_quickack": false, 00:13:59.862 "enable_placement_id": 0, 00:13:59.862 "enable_zerocopy_send_server": true, 00:13:59.862 "enable_zerocopy_send_client": false, 00:13:59.862 "zerocopy_threshold": 0, 00:13:59.862 "tls_version": 0, 00:13:59.862 "enable_ktls": false 00:13:59.862 } 00:13:59.862 }, 00:13:59.862 { 00:13:59.862 "method": "sock_impl_set_options", 00:13:59.862 "params": { 00:13:59.862 "impl_name": "uring", 00:13:59.862 "recv_buf_size": 2097152, 00:13:59.862 "send_buf_size": 2097152, 00:13:59.862 "enable_recv_pipe": true, 00:13:59.862 "enable_quickack": false, 00:13:59.862 "enable_placement_id": 0, 00:13:59.862 "enable_zerocopy_send_server": false, 00:13:59.862 "enable_zerocopy_send_client": false, 00:13:59.862 "zerocopy_threshold": 0, 00:13:59.862 "tls_version": 0, 00:13:59.862 "enable_ktls": false 00:13:59.862 } 00:13:59.862 } 00:13:59.862 ] 00:13:59.862 }, 00:13:59.862 { 00:13:59.862 "subsystem": 
"vmd", 00:13:59.862 "config": [] 00:13:59.862 }, 00:13:59.862 { 00:13:59.862 "subsystem": "accel", 00:13:59.862 "config": [ 00:13:59.862 { 00:13:59.862 "method": "accel_set_options", 00:13:59.862 "params": { 00:13:59.862 "small_cache_size": 128, 00:13:59.862 "large_cache_size": 16, 00:13:59.862 "task_count": 2048, 00:13:59.862 "sequence_count": 2048, 00:13:59.862 "buf_count": 2048 00:13:59.862 } 00:13:59.862 } 00:13:59.862 ] 00:13:59.862 }, 00:13:59.862 { 00:13:59.862 "subsystem": "bdev", 00:13:59.862 "config": [ 00:13:59.862 { 00:13:59.862 "method": "bdev_set_options", 00:13:59.862 "params": { 00:13:59.862 "bdev_io_pool_size": 65535, 00:13:59.862 "bdev_io_cache_size": 256, 00:13:59.862 "bdev_auto_examine": true, 00:13:59.862 "iobuf_small_cache_size": 128, 00:13:59.863 "iobuf_large_cache_size": 16 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "bdev_raid_set_options", 00:13:59.863 "params": { 00:13:59.863 "process_window_size_kb": 1024, 00:13:59.863 "process_max_bandwidth_mb_sec": 0 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "bdev_iscsi_set_options", 00:13:59.863 "params": { 00:13:59.863 "timeout_sec": 30 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "bdev_nvme_set_options", 00:13:59.863 "params": { 00:13:59.863 "action_on_timeout": "none", 00:13:59.863 "timeout_us": 0, 00:13:59.863 "timeout_admin_us": 0, 00:13:59.863 "keep_alive_timeout_ms": 10000, 00:13:59.863 "arbitration_burst": 0, 00:13:59.863 "low_priority_weight": 0, 00:13:59.863 "medium_priority_weight": 0, 00:13:59.863 "high_priority_weight": 0, 00:13:59.863 "nvme_adminq_poll_period_us": 10000, 00:13:59.863 "nvme_ioq_poll_period_us": 0, 00:13:59.863 "io_queue_requests": 0, 00:13:59.863 "delay_cmd_submit": true, 00:13:59.863 "transport_retry_count": 4, 00:13:59.863 "bdev_retry_count": 3, 00:13:59.863 "transport_ack_timeout": 0, 00:13:59.863 "ctrlr_loss_timeout_sec": 0, 00:13:59.863 "reconnect_delay_sec": 0, 00:13:59.863 "fast_io_fail_timeout_sec": 0, 00:13:59.863 "disable_auto_failback": false, 00:13:59.863 "generate_uuids": false, 00:13:59.863 "transport_tos": 0, 00:13:59.863 "nvme_error_stat": false, 00:13:59.863 "rdma_srq_size": 0, 00:13:59.863 "io_path_stat": false, 00:13:59.863 "allow_accel_sequence": false, 00:13:59.863 "rdma_max_cq_size": 0, 00:13:59.863 "rdma_cm_event_timeout_ms": 0, 00:13:59.863 "dhchap_digests": [ 00:13:59.863 "sha256", 00:13:59.863 "sha384", 00:13:59.863 "sha512" 00:13:59.863 ], 00:13:59.863 "dhchap_dhgroups": [ 00:13:59.863 "null", 00:13:59.863 "ffdhe2048", 00:13:59.863 "ffdhe3072", 00:13:59.863 "ffdhe4096", 00:13:59.863 "ffdhe6144", 00:13:59.863 "ffdhe8192" 00:13:59.863 ] 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "bdev_nvme_set_hotplug", 00:13:59.863 "params": { 00:13:59.863 "period_us": 100000, 00:13:59.863 "enable": false 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "bdev_malloc_create", 00:13:59.863 "params": { 00:13:59.863 "name": "malloc0", 00:13:59.863 "num_blocks": 8192, 00:13:59.863 "block_size": 4096, 00:13:59.863 "physical_block_size": 4096, 00:13:59.863 "uuid": "5effd689-d977-48ad-96aa-89e230add063", 00:13:59.863 "optimal_io_boundary": 0, 00:13:59.863 "md_size": 0, 00:13:59.863 "dif_type": 0, 00:13:59.863 "dif_is_head_of_md": false, 00:13:59.863 "dif_pi_format": 0 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "bdev_wait_for_examine" 00:13:59.863 } 00:13:59.863 ] 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "subsystem": "nbd", 00:13:59.863 "config": 
[] 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "subsystem": "scheduler", 00:13:59.863 "config": [ 00:13:59.863 { 00:13:59.863 "method": "framework_set_scheduler", 00:13:59.863 "params": { 00:13:59.863 "name": "static" 00:13:59.863 } 00:13:59.863 } 00:13:59.863 ] 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "subsystem": "nvmf", 00:13:59.863 "config": [ 00:13:59.863 { 00:13:59.863 "method": "nvmf_set_config", 00:13:59.863 "params": { 00:13:59.863 "discovery_filter": "match_any", 00:13:59.863 "admin_cmd_passthru": { 00:13:59.863 "identify_ctrlr": false 00:13:59.863 } 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "nvmf_set_max_subsystems", 00:13:59.863 "params": { 00:13:59.863 "max_subsystems": 1024 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "nvmf_set_crdt", 00:13:59.863 "params": { 00:13:59.863 "crdt1": 0, 00:13:59.863 "crdt2": 0, 00:13:59.863 "crdt3": 0 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "nvmf_create_transport", 00:13:59.863 "params": { 00:13:59.863 "trtype": "TCP", 00:13:59.863 "max_queue_depth": 128, 00:13:59.863 "max_io_qpairs_per_ctrlr": 127, 00:13:59.863 "in_capsule_data_size": 4096, 00:13:59.863 "max_io_size": 131072, 00:13:59.863 "io_unit_size": 131072, 00:13:59.863 "max_aq_depth": 128, 00:13:59.863 "num_shared_buffers": 511, 00:13:59.863 "buf_cache_size": 4294967295, 00:13:59.863 "dif_insert_or_strip": false, 00:13:59.863 "zcopy": false, 00:13:59.863 "c2h_success": false, 00:13:59.863 "sock_priority": 0, 00:13:59.863 "abort_timeout_sec": 1, 00:13:59.863 "ack_timeout": 0, 00:13:59.863 "data_wr_pool_size": 0 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "nvmf_create_subsystem", 00:13:59.863 "params": { 00:13:59.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.863 "allow_any_host": false, 00:13:59.863 "serial_number": "SPDK00000000000001", 00:13:59.863 "model_number": "SPDK bdev Controller", 00:13:59.863 "max_namespaces": 10, 00:13:59.863 "min_cntlid": 1, 00:13:59.863 "max_cntlid": 65519, 00:13:59.863 "ana_reporting": false 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "nvmf_subsystem_add_host", 00:13:59.863 "params": { 00:13:59.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.863 "host": "nqn.2016-06.io.spdk:host1", 00:13:59.863 "psk": "/tmp/tmp.jo7FAS4e6c" 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "nvmf_subsystem_add_ns", 00:13:59.863 "params": { 00:13:59.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.863 "namespace": { 00:13:59.863 "nsid": 1, 00:13:59.863 "bdev_name": "malloc0", 00:13:59.863 "nguid": "5EFFD689D97748AD96AA89E230ADD063", 00:13:59.863 "uuid": "5effd689-d977-48ad-96aa-89e230add063", 00:13:59.863 "no_auto_visible": false 00:13:59.863 } 00:13:59.863 } 00:13:59.863 }, 00:13:59.863 { 00:13:59.863 "method": "nvmf_subsystem_add_listener", 00:13:59.863 "params": { 00:13:59.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.863 "listen_address": { 00:13:59.863 "trtype": "TCP", 00:13:59.863 "adrfam": "IPv4", 00:13:59.863 "traddr": "10.0.0.2", 00:13:59.863 "trsvcid": "4420" 00:13:59.863 }, 00:13:59.863 "secure_channel": true 00:13:59.863 } 00:13:59.863 } 00:13:59.863 ] 00:13:59.863 } 00:13:59.863 ] 00:13:59.863 }' 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85650 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85650 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85650 ']' 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.863 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.863 [2024-07-25 01:57:15.102222] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:13:59.863 [2024-07-25 01:57:15.102313] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.123 [2024-07-25 01:57:15.225331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:00.123 [2024-07-25 01:57:15.235771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.123 [2024-07-25 01:57:15.268357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.123 [2024-07-25 01:57:15.268414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.123 [2024-07-25 01:57:15.268423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.123 [2024-07-25 01:57:15.268430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.123 [2024-07-25 01:57:15.268436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
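The target above is launched with '-c /dev/fd/62', i.e. the JSON blob echoed just before it becomes the app's startup configuration. A minimal sketch of this save/replay round trip, assuming a scratch file tgt.json (the test script itself streams the JSON through a file descriptor and a shell variable rather than a file):

    # Snapshot the live configuration of a running SPDK app over its RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
    # Boot a fresh target pre-configured from that snapshot
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c tgt.json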
00:14:00.123 [2024-07-25 01:57:15.268504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.123 [2024-07-25 01:57:15.408802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:00.381 [2024-07-25 01:57:15.454665] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.381 [2024-07-25 01:57:15.470576] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:00.381 [2024-07-25 01:57:15.486607] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:00.381 [2024-07-25 01:57:15.495052] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=85682 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 85682 /var/tmp/bdevperf.sock 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85682 ']' 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:00.948 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:00.948 "subsystems": [ 00:14:00.948 { 00:14:00.948 "subsystem": "keyring", 00:14:00.948 "config": [] 00:14:00.948 }, 00:14:00.948 { 00:14:00.948 "subsystem": "iobuf", 00:14:00.948 "config": [ 00:14:00.948 { 00:14:00.949 "method": "iobuf_set_options", 00:14:00.949 "params": { 00:14:00.949 "small_pool_count": 8192, 00:14:00.949 "large_pool_count": 1024, 00:14:00.949 "small_bufsize": 8192, 00:14:00.949 "large_bufsize": 135168 00:14:00.949 } 00:14:00.949 } 00:14:00.949 ] 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "subsystem": "sock", 00:14:00.949 "config": [ 00:14:00.949 { 00:14:00.949 "method": "sock_set_default_impl", 00:14:00.949 "params": { 00:14:00.949 "impl_name": "uring" 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "sock_impl_set_options", 00:14:00.949 "params": { 00:14:00.949 "impl_name": "ssl", 00:14:00.949 "recv_buf_size": 4096, 00:14:00.949 "send_buf_size": 4096, 00:14:00.949 "enable_recv_pipe": true, 00:14:00.949 "enable_quickack": false, 00:14:00.949 "enable_placement_id": 0, 00:14:00.949 "enable_zerocopy_send_server": true, 00:14:00.949 "enable_zerocopy_send_client": false, 00:14:00.949 "zerocopy_threshold": 0, 00:14:00.949 "tls_version": 0, 00:14:00.949 "enable_ktls": false 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "sock_impl_set_options", 00:14:00.949 "params": { 00:14:00.949 "impl_name": "posix", 00:14:00.949 "recv_buf_size": 2097152, 00:14:00.949 "send_buf_size": 2097152, 00:14:00.949 "enable_recv_pipe": true, 00:14:00.949 "enable_quickack": false, 00:14:00.949 "enable_placement_id": 0, 00:14:00.949 "enable_zerocopy_send_server": true, 00:14:00.949 "enable_zerocopy_send_client": false, 00:14:00.949 "zerocopy_threshold": 0, 00:14:00.949 "tls_version": 0, 00:14:00.949 "enable_ktls": false 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "sock_impl_set_options", 00:14:00.949 "params": { 00:14:00.949 "impl_name": "uring", 00:14:00.949 "recv_buf_size": 2097152, 00:14:00.949 "send_buf_size": 2097152, 00:14:00.949 "enable_recv_pipe": true, 00:14:00.949 "enable_quickack": false, 00:14:00.949 "enable_placement_id": 0, 00:14:00.949 "enable_zerocopy_send_server": false, 00:14:00.949 "enable_zerocopy_send_client": false, 00:14:00.949 "zerocopy_threshold": 0, 00:14:00.949 "tls_version": 0, 00:14:00.949 "enable_ktls": false 00:14:00.949 } 00:14:00.949 } 00:14:00.949 ] 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "subsystem": "vmd", 00:14:00.949 "config": [] 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "subsystem": "accel", 00:14:00.949 "config": [ 00:14:00.949 { 00:14:00.949 "method": "accel_set_options", 00:14:00.949 "params": { 00:14:00.949 "small_cache_size": 128, 00:14:00.949 "large_cache_size": 16, 00:14:00.949 "task_count": 2048, 00:14:00.949 "sequence_count": 2048, 00:14:00.949 "buf_count": 2048 00:14:00.949 } 00:14:00.949 } 00:14:00.949 ] 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "subsystem": "bdev", 00:14:00.949 "config": [ 00:14:00.949 { 00:14:00.949 "method": "bdev_set_options", 
00:14:00.949 "params": { 00:14:00.949 "bdev_io_pool_size": 65535, 00:14:00.949 "bdev_io_cache_size": 256, 00:14:00.949 "bdev_auto_examine": true, 00:14:00.949 "iobuf_small_cache_size": 128, 00:14:00.949 "iobuf_large_cache_size": 16 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "bdev_raid_set_options", 00:14:00.949 "params": { 00:14:00.949 "process_window_size_kb": 1024, 00:14:00.949 "process_max_bandwidth_mb_sec": 0 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "bdev_iscsi_set_options", 00:14:00.949 "params": { 00:14:00.949 "timeout_sec": 30 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "bdev_nvme_set_options", 00:14:00.949 "params": { 00:14:00.949 "action_on_timeout": "none", 00:14:00.949 "timeout_us": 0, 00:14:00.949 "timeout_admin_us": 0, 00:14:00.949 "keep_alive_timeout_ms": 10000, 00:14:00.949 "arbitration_burst": 0, 00:14:00.949 "low_priority_weight": 0, 00:14:00.949 "medium_priority_weight": 0, 00:14:00.949 "high_priority_weight": 0, 00:14:00.949 "nvme_adminq_poll_period_us": 10000, 00:14:00.949 "nvme_ioq_poll_period_us": 0, 00:14:00.949 "io_queue_requests": 512, 00:14:00.949 "delay_cmd_submit": true, 00:14:00.949 "transport_retry_count": 4, 00:14:00.949 "bdev_retry_count": 3, 00:14:00.949 "transport_ack_timeout": 0, 00:14:00.949 "ctrlr_loss_timeout_sec": 0, 00:14:00.949 "reconnect_delay_sec": 0, 00:14:00.949 "fast_io_fail_timeout_sec": 0, 00:14:00.949 "disable_auto_failback": false, 00:14:00.949 "generate_uuids": false, 00:14:00.949 "transport_tos": 0, 00:14:00.949 "nvme_error_stat": false, 00:14:00.949 "rdma_srq_size": 0, 00:14:00.949 "io_path_stat": false, 00:14:00.949 "allow_accel_sequence": false, 00:14:00.949 "rdma_max_cq_size": 0, 00:14:00.949 "rdma_cm_event_timeout_ms": 0, 00:14:00.949 "dhchap_digests": [ 00:14:00.949 "sha256", 00:14:00.949 "sha384", 00:14:00.949 "sha512" 00:14:00.949 ], 00:14:00.949 "dhchap_dhgroups": [ 00:14:00.949 "null", 00:14:00.949 "ffdhe2048", 00:14:00.949 "ffdhe3072", 00:14:00.949 "ffdhe4096", 00:14:00.949 "ffdhe6144", 00:14:00.949 "ffdhe8192" 00:14:00.949 ] 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "bdev_nvme_attach_controller", 00:14:00.949 "params": { 00:14:00.949 "name": "TLSTEST", 00:14:00.949 "trtype": "TCP", 00:14:00.949 "adrfam": "IPv4", 00:14:00.949 "traddr": "10.0.0.2", 00:14:00.949 "trsvcid": "4420", 00:14:00.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.949 "prchk_reftag": false, 00:14:00.949 "prchk_guard": false, 00:14:00.949 "ctrlr_loss_timeout_sec": 0, 00:14:00.949 "reconnect_delay_sec": 0, 00:14:00.949 "fast_io_fail_timeout_sec": 0, 00:14:00.949 "psk": "/tmp/tmp.jo7FAS4e6c", 00:14:00.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.949 "hdgst": false, 00:14:00.949 "ddgst": false 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "bdev_nvme_set_hotplug", 00:14:00.949 "params": { 00:14:00.949 "period_us": 100000, 00:14:00.949 "enable": false 00:14:00.949 } 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "method": "bdev_wait_for_examine" 00:14:00.949 } 00:14:00.949 ] 00:14:00.949 }, 00:14:00.949 { 00:14:00.949 "subsystem": "nbd", 00:14:00.949 "config": [] 00:14:00.949 } 00:14:00.949 ] 00:14:00.949 }' 00:14:00.949 [2024-07-25 01:57:16.103394] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:14:00.949 [2024-07-25 01:57:16.103487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85682 ] 00:14:00.949 [2024-07-25 01:57:16.225461] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:00.949 [2024-07-25 01:57:16.244344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.208 [2024-07-25 01:57:16.287282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.208 [2024-07-25 01:57:16.402187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.208 [2024-07-25 01:57:16.424636] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:01.208 [2024-07-25 01:57:16.424750] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:01.774 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.774 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:01.774 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:02.042 Running I/O for 10 seconds... 00:14:12.025 00:14:12.025 Latency(us) 00:14:12.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.025 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:12.025 Verification LBA range: start 0x0 length 0x2000 00:14:12.025 TLSTESTn1 : 10.02 4194.61 16.39 0.00 0.00 30447.27 5421.61 24307.90 00:14:12.025 =================================================================================================================== 00:14:12.025 Total : 4194.61 16.39 0.00 0.00 30447.27 5421.61 24307.90 00:14:12.025 0 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 85682 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85682 ']' 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85682 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85682 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:12.025 killing process with pid 85682 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85682' 00:14:12.025 Received shutdown signal, test time was about 10.000000 seconds 00:14:12.025 00:14:12.025 Latency(us) 00:14:12.025 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.025 =================================================================================================================== 00:14:12.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85682 00:14:12.025 [2024-07-25 01:57:27.167798] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85682 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 85650 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85650 ']' 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85650 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.025 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85650 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:12.284 killing process with pid 85650 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85650' 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85650 00:14:12.284 [2024-07-25 01:57:27.341776] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85650 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85816 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85816 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85816 ']' 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.284 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.284 [2024-07-25 01:57:27.545196] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:14:12.284 [2024-07-25 01:57:27.545316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.542 [2024-07-25 01:57:27.668601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:12.542 [2024-07-25 01:57:27.688639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.542 [2024-07-25 01:57:27.730410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.542 [2024-07-25 01:57:27.730484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.542 [2024-07-25 01:57:27.730509] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.542 [2024-07-25 01:57:27.730519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.542 [2024-07-25 01:57:27.730528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
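The killprocess helper leaves the same xtrace fingerprint at every teardown in this run; a rough reconstruction of the visible steps (inferred from the trace, not the helper's actual source):

    kill -0 "$pid"                    # fail fast if the pid is already gone
    ps --no-headers -o comm= "$pid"   # resolve the process name for the log
    kill "$pid"                       # terminate the app...
    wait "$pid"                       # ...and reap it before moving on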
00:14:12.542 [2024-07-25 01:57:27.730558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.542 [2024-07-25 01:57:27.765456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.jo7FAS4e6c 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jo7FAS4e6c 00:14:13.475 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:13.735 [2024-07-25 01:57:28.780172] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.735 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:13.735 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:14.001 [2024-07-25 01:57:29.232295] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:14.001 [2024-07-25 01:57:29.232562] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.001 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:14.259 malloc0 00:14:14.259 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:14.516 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jo7FAS4e6c 00:14:14.774 [2024-07-25 01:57:29.970197] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85865 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85865 /var/tmp/bdevperf.sock 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85865 ']' 
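Condensed, the setup_nvmf_tgt sequence traced above provisions the TLS-enabled target with six RPCs; a minimal sketch reusing the exact arguments from this run (rpc.py again stands for the full scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled ("secure_channel": true in the saved config)
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Path-based PSK hand-off (the deprecated form warned about above)
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jo7FAS4e6c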
00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.774 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.774 [2024-07-25 01:57:30.035885] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:14:14.774 [2024-07-25 01:57:30.036030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85865 ] 00:14:15.031 [2024-07-25 01:57:30.153747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:15.031 [2024-07-25 01:57:30.174926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.031 [2024-07-25 01:57:30.217071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.031 [2024-07-25 01:57:30.250215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:15.963 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.963 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:15.963 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jo7FAS4e6c 00:14:15.963 01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:16.220 [2024-07-25 01:57:31.370492] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:16.220 nvme0n1 00:14:16.220 01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.477 Running I/O for 1 seconds... 
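On the initiator side, the same PSK file is registered as key0 in bdevperf's keyring and referenced by name when attaching the controller; a minimal sketch of the three RPC-socket interactions traced above:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jo7FAS4e6c
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Kick off the timed verify workload against the attached namespace
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests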
00:14:17.411 00:14:17.411 Latency(us) 00:14:17.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.411 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:17.411 Verification LBA range: start 0x0 length 0x2000 00:14:17.411 nvme0n1 : 1.02 4304.92 16.82 0.00 0.00 29340.17 4230.05 18350.08 00:14:17.411 =================================================================================================================== 00:14:17.411 Total : 4304.92 16.82 0.00 0.00 29340.17 4230.05 18350.08 00:14:17.411 0 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 85865 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85865 ']' 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85865 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85865 00:14:17.411 killing process with pid 85865 00:14:17.411 Received shutdown signal, test time was about 1.000000 seconds 00:14:17.411 00:14:17.411 Latency(us) 00:14:17.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.411 =================================================================================================================== 00:14:17.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85865' 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85865 00:14:17.411 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85865 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 85816 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85816 ']' 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85816 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85816 00:14:17.670 killing process with pid 85816 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85816' 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85816 00:14:17.670 [2024-07-25 01:57:32.777730] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: 
deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85816 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85916 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85916 00:14:17.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85916 ']' 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.670 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.928 [2024-07-25 01:57:32.990002] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:14:17.928 [2024-07-25 01:57:32.990122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.928 [2024-07-25 01:57:33.113656] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:17.928 [2024-07-25 01:57:33.133383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.928 [2024-07-25 01:57:33.169301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.928 [2024-07-25 01:57:33.169382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.928 [2024-07-25 01:57:33.169407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.928 [2024-07-25 01:57:33.169416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.929 [2024-07-25 01:57:33.169422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
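Every target start above prints the same tracing hint (Tracepoint Group Mask 0xFFFF); a minimal sketch of acting on it, with the arguments the notices themselves suggest:

    # Snapshot nvmf tracepoint events from the running app instance (shm id 0)
    spdk_trace -s nvmf -i 0
    # Or preserve the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0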
00:14:17.929 [2024-07-25 01:57:33.169446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.929 [2024-07-25 01:57:33.197797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.870 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.870 [2024-07-25 01:57:33.957678] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.870 malloc0 00:14:18.870 [2024-07-25 01:57:33.984511] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:18.870 [2024-07-25 01:57:33.984738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=85948 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 85948 /var/tmp/bdevperf.sock 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85948 ']' 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.870 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.870 [2024-07-25 01:57:34.058831] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:14:18.870 [2024-07-25 01:57:34.058942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85948 ] 00:14:19.128 [2024-07-25 01:57:34.175277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:19.128 [2024-07-25 01:57:34.193810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.128 [2024-07-25 01:57:34.233860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.128 [2024-07-25 01:57:34.266629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:19.128 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.128 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:19.128 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jo7FAS4e6c 00:14:19.385 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:19.641 [2024-07-25 01:57:34.710885] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.641 nvme0n1 00:14:19.641 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:19.642 Running I/O for 1 seconds... 
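This final round repeats the drive-by-RPC pattern used for every bdevperf run here: start the app idle with -z on a private RPC socket, configure it remotely, then trigger the workload. A minimal sketch with the flags from this invocation (backgrounding with & is illustrative; the script tracks the pid itself):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    # ...keyring_file_add_key / bdev_nvme_attach_controller RPCs go here...
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests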
00:14:21.013 00:14:21.013 Latency(us) 00:14:21.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.013 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:21.013 Verification LBA range: start 0x0 length 0x2000 00:14:21.013 nvme0n1 : 1.02 4028.49 15.74 0.00 0.00 31440.27 6762.12 20494.89 00:14:21.013 =================================================================================================================== 00:14:21.013 Total : 4028.49 15.74 0.00 0.00 31440.27 6762.12 20494.89 00:14:21.013 0 00:14:21.013 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:21.013 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.013 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.013 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.013 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:21.013 "subsystems": [ 00:14:21.013 { 00:14:21.013 "subsystem": "keyring", 00:14:21.013 "config": [ 00:14:21.013 { 00:14:21.013 "method": "keyring_file_add_key", 00:14:21.013 "params": { 00:14:21.013 "name": "key0", 00:14:21.013 "path": "/tmp/tmp.jo7FAS4e6c" 00:14:21.013 } 00:14:21.013 } 00:14:21.013 ] 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "subsystem": "iobuf", 00:14:21.013 "config": [ 00:14:21.013 { 00:14:21.013 "method": "iobuf_set_options", 00:14:21.013 "params": { 00:14:21.013 "small_pool_count": 8192, 00:14:21.013 "large_pool_count": 1024, 00:14:21.013 "small_bufsize": 8192, 00:14:21.013 "large_bufsize": 135168 00:14:21.013 } 00:14:21.013 } 00:14:21.013 ] 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "subsystem": "sock", 00:14:21.013 "config": [ 00:14:21.013 { 00:14:21.013 "method": "sock_set_default_impl", 00:14:21.013 "params": { 00:14:21.013 "impl_name": "uring" 00:14:21.013 } 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "method": "sock_impl_set_options", 00:14:21.013 "params": { 00:14:21.013 "impl_name": "ssl", 00:14:21.013 "recv_buf_size": 4096, 00:14:21.013 "send_buf_size": 4096, 00:14:21.013 "enable_recv_pipe": true, 00:14:21.013 "enable_quickack": false, 00:14:21.013 "enable_placement_id": 0, 00:14:21.013 "enable_zerocopy_send_server": true, 00:14:21.013 "enable_zerocopy_send_client": false, 00:14:21.013 "zerocopy_threshold": 0, 00:14:21.013 "tls_version": 0, 00:14:21.013 "enable_ktls": false 00:14:21.013 } 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "method": "sock_impl_set_options", 00:14:21.013 "params": { 00:14:21.013 "impl_name": "posix", 00:14:21.013 "recv_buf_size": 2097152, 00:14:21.013 "send_buf_size": 2097152, 00:14:21.013 "enable_recv_pipe": true, 00:14:21.013 "enable_quickack": false, 00:14:21.013 "enable_placement_id": 0, 00:14:21.013 "enable_zerocopy_send_server": true, 00:14:21.013 "enable_zerocopy_send_client": false, 00:14:21.013 "zerocopy_threshold": 0, 00:14:21.013 "tls_version": 0, 00:14:21.013 "enable_ktls": false 00:14:21.013 } 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "method": "sock_impl_set_options", 00:14:21.013 "params": { 00:14:21.013 "impl_name": "uring", 00:14:21.013 "recv_buf_size": 2097152, 00:14:21.013 "send_buf_size": 2097152, 00:14:21.013 "enable_recv_pipe": true, 00:14:21.013 "enable_quickack": false, 00:14:21.013 "enable_placement_id": 0, 00:14:21.013 "enable_zerocopy_send_server": false, 00:14:21.013 "enable_zerocopy_send_client": false, 00:14:21.013 
"zerocopy_threshold": 0, 00:14:21.013 "tls_version": 0, 00:14:21.013 "enable_ktls": false 00:14:21.013 } 00:14:21.013 } 00:14:21.013 ] 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "subsystem": "vmd", 00:14:21.013 "config": [] 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "subsystem": "accel", 00:14:21.013 "config": [ 00:14:21.013 { 00:14:21.013 "method": "accel_set_options", 00:14:21.013 "params": { 00:14:21.013 "small_cache_size": 128, 00:14:21.013 "large_cache_size": 16, 00:14:21.013 "task_count": 2048, 00:14:21.013 "sequence_count": 2048, 00:14:21.013 "buf_count": 2048 00:14:21.013 } 00:14:21.013 } 00:14:21.013 ] 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "subsystem": "bdev", 00:14:21.013 "config": [ 00:14:21.013 { 00:14:21.013 "method": "bdev_set_options", 00:14:21.013 "params": { 00:14:21.013 "bdev_io_pool_size": 65535, 00:14:21.013 "bdev_io_cache_size": 256, 00:14:21.013 "bdev_auto_examine": true, 00:14:21.013 "iobuf_small_cache_size": 128, 00:14:21.013 "iobuf_large_cache_size": 16 00:14:21.013 } 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "method": "bdev_raid_set_options", 00:14:21.013 "params": { 00:14:21.013 "process_window_size_kb": 1024, 00:14:21.013 "process_max_bandwidth_mb_sec": 0 00:14:21.013 } 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "method": "bdev_iscsi_set_options", 00:14:21.013 "params": { 00:14:21.013 "timeout_sec": 30 00:14:21.013 } 00:14:21.013 }, 00:14:21.013 { 00:14:21.013 "method": "bdev_nvme_set_options", 00:14:21.013 "params": { 00:14:21.013 "action_on_timeout": "none", 00:14:21.013 "timeout_us": 0, 00:14:21.013 "timeout_admin_us": 0, 00:14:21.014 "keep_alive_timeout_ms": 10000, 00:14:21.014 "arbitration_burst": 0, 00:14:21.014 "low_priority_weight": 0, 00:14:21.014 "medium_priority_weight": 0, 00:14:21.014 "high_priority_weight": 0, 00:14:21.014 "nvme_adminq_poll_period_us": 10000, 00:14:21.014 "nvme_ioq_poll_period_us": 0, 00:14:21.014 "io_queue_requests": 0, 00:14:21.014 "delay_cmd_submit": true, 00:14:21.014 "transport_retry_count": 4, 00:14:21.014 "bdev_retry_count": 3, 00:14:21.014 "transport_ack_timeout": 0, 00:14:21.014 "ctrlr_loss_timeout_sec": 0, 00:14:21.014 "reconnect_delay_sec": 0, 00:14:21.014 "fast_io_fail_timeout_sec": 0, 00:14:21.014 "disable_auto_failback": false, 00:14:21.014 "generate_uuids": false, 00:14:21.014 "transport_tos": 0, 00:14:21.014 "nvme_error_stat": false, 00:14:21.014 "rdma_srq_size": 0, 00:14:21.014 "io_path_stat": false, 00:14:21.014 "allow_accel_sequence": false, 00:14:21.014 "rdma_max_cq_size": 0, 00:14:21.014 "rdma_cm_event_timeout_ms": 0, 00:14:21.014 "dhchap_digests": [ 00:14:21.014 "sha256", 00:14:21.014 "sha384", 00:14:21.014 "sha512" 00:14:21.014 ], 00:14:21.014 "dhchap_dhgroups": [ 00:14:21.014 "null", 00:14:21.014 "ffdhe2048", 00:14:21.014 "ffdhe3072", 00:14:21.014 "ffdhe4096", 00:14:21.014 "ffdhe6144", 00:14:21.014 "ffdhe8192" 00:14:21.014 ] 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "bdev_nvme_set_hotplug", 00:14:21.014 "params": { 00:14:21.014 "period_us": 100000, 00:14:21.014 "enable": false 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "bdev_malloc_create", 00:14:21.014 "params": { 00:14:21.014 "name": "malloc0", 00:14:21.014 "num_blocks": 8192, 00:14:21.014 "block_size": 4096, 00:14:21.014 "physical_block_size": 4096, 00:14:21.014 "uuid": "18cde898-2c08-4f0c-a99e-2eae0fa3ae74", 00:14:21.014 "optimal_io_boundary": 0, 00:14:21.014 "md_size": 0, 00:14:21.014 "dif_type": 0, 00:14:21.014 "dif_is_head_of_md": false, 00:14:21.014 "dif_pi_format": 0 00:14:21.014 } 
00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "bdev_wait_for_examine" 00:14:21.014 } 00:14:21.014 ] 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "subsystem": "nbd", 00:14:21.014 "config": [] 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "subsystem": "scheduler", 00:14:21.014 "config": [ 00:14:21.014 { 00:14:21.014 "method": "framework_set_scheduler", 00:14:21.014 "params": { 00:14:21.014 "name": "static" 00:14:21.014 } 00:14:21.014 } 00:14:21.014 ] 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "subsystem": "nvmf", 00:14:21.014 "config": [ 00:14:21.014 { 00:14:21.014 "method": "nvmf_set_config", 00:14:21.014 "params": { 00:14:21.014 "discovery_filter": "match_any", 00:14:21.014 "admin_cmd_passthru": { 00:14:21.014 "identify_ctrlr": false 00:14:21.014 } 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "nvmf_set_max_subsystems", 00:14:21.014 "params": { 00:14:21.014 "max_subsystems": 1024 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "nvmf_set_crdt", 00:14:21.014 "params": { 00:14:21.014 "crdt1": 0, 00:14:21.014 "crdt2": 0, 00:14:21.014 "crdt3": 0 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "nvmf_create_transport", 00:14:21.014 "params": { 00:14:21.014 "trtype": "TCP", 00:14:21.014 "max_queue_depth": 128, 00:14:21.014 "max_io_qpairs_per_ctrlr": 127, 00:14:21.014 "in_capsule_data_size": 4096, 00:14:21.014 "max_io_size": 131072, 00:14:21.014 "io_unit_size": 131072, 00:14:21.014 "max_aq_depth": 128, 00:14:21.014 "num_shared_buffers": 511, 00:14:21.014 "buf_cache_size": 4294967295, 00:14:21.014 "dif_insert_or_strip": false, 00:14:21.014 "zcopy": false, 00:14:21.014 "c2h_success": false, 00:14:21.014 "sock_priority": 0, 00:14:21.014 "abort_timeout_sec": 1, 00:14:21.014 "ack_timeout": 0, 00:14:21.014 "data_wr_pool_size": 0 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "nvmf_create_subsystem", 00:14:21.014 "params": { 00:14:21.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.014 "allow_any_host": false, 00:14:21.014 "serial_number": "00000000000000000000", 00:14:21.014 "model_number": "SPDK bdev Controller", 00:14:21.014 "max_namespaces": 32, 00:14:21.014 "min_cntlid": 1, 00:14:21.014 "max_cntlid": 65519, 00:14:21.014 "ana_reporting": false 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "nvmf_subsystem_add_host", 00:14:21.014 "params": { 00:14:21.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.014 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.014 "psk": "key0" 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "nvmf_subsystem_add_ns", 00:14:21.014 "params": { 00:14:21.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.014 "namespace": { 00:14:21.014 "nsid": 1, 00:14:21.014 "bdev_name": "malloc0", 00:14:21.014 "nguid": "18CDE8982C084F0CA99E2EAE0FA3AE74", 00:14:21.014 "uuid": "18cde898-2c08-4f0c-a99e-2eae0fa3ae74", 00:14:21.014 "no_auto_visible": false 00:14:21.014 } 00:14:21.014 } 00:14:21.014 }, 00:14:21.014 { 00:14:21.014 "method": "nvmf_subsystem_add_listener", 00:14:21.014 "params": { 00:14:21.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.014 "listen_address": { 00:14:21.014 "trtype": "TCP", 00:14:21.014 "adrfam": "IPv4", 00:14:21.014 "traddr": "10.0.0.2", 00:14:21.014 "trsvcid": "4420" 00:14:21.014 }, 00:14:21.014 "secure_channel": false, 00:14:21.014 "sock_impl": "ssl" 00:14:21.014 } 00:14:21.014 } 00:14:21.014 ] 00:14:21.014 } 00:14:21.014 ] 00:14:21.014 }' 00:14:21.014 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:21.272 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:21.272 "subsystems": [ 00:14:21.272 { 00:14:21.272 "subsystem": "keyring", 00:14:21.272 "config": [ 00:14:21.272 { 00:14:21.272 "method": "keyring_file_add_key", 00:14:21.272 "params": { 00:14:21.272 "name": "key0", 00:14:21.272 "path": "/tmp/tmp.jo7FAS4e6c" 00:14:21.272 } 00:14:21.272 } 00:14:21.272 ] 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "subsystem": "iobuf", 00:14:21.272 "config": [ 00:14:21.272 { 00:14:21.272 "method": "iobuf_set_options", 00:14:21.272 "params": { 00:14:21.272 "small_pool_count": 8192, 00:14:21.272 "large_pool_count": 1024, 00:14:21.272 "small_bufsize": 8192, 00:14:21.272 "large_bufsize": 135168 00:14:21.272 } 00:14:21.272 } 00:14:21.272 ] 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "subsystem": "sock", 00:14:21.272 "config": [ 00:14:21.272 { 00:14:21.272 "method": "sock_set_default_impl", 00:14:21.272 "params": { 00:14:21.272 "impl_name": "uring" 00:14:21.272 } 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "method": "sock_impl_set_options", 00:14:21.272 "params": { 00:14:21.272 "impl_name": "ssl", 00:14:21.272 "recv_buf_size": 4096, 00:14:21.272 "send_buf_size": 4096, 00:14:21.272 "enable_recv_pipe": true, 00:14:21.272 "enable_quickack": false, 00:14:21.272 "enable_placement_id": 0, 00:14:21.272 "enable_zerocopy_send_server": true, 00:14:21.272 "enable_zerocopy_send_client": false, 00:14:21.272 "zerocopy_threshold": 0, 00:14:21.272 "tls_version": 0, 00:14:21.272 "enable_ktls": false 00:14:21.272 } 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "method": "sock_impl_set_options", 00:14:21.272 "params": { 00:14:21.272 "impl_name": "posix", 00:14:21.272 "recv_buf_size": 2097152, 00:14:21.272 "send_buf_size": 2097152, 00:14:21.272 "enable_recv_pipe": true, 00:14:21.272 "enable_quickack": false, 00:14:21.272 "enable_placement_id": 0, 00:14:21.272 "enable_zerocopy_send_server": true, 00:14:21.272 "enable_zerocopy_send_client": false, 00:14:21.272 "zerocopy_threshold": 0, 00:14:21.272 "tls_version": 0, 00:14:21.272 "enable_ktls": false 00:14:21.272 } 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "method": "sock_impl_set_options", 00:14:21.272 "params": { 00:14:21.272 "impl_name": "uring", 00:14:21.272 "recv_buf_size": 2097152, 00:14:21.272 "send_buf_size": 2097152, 00:14:21.272 "enable_recv_pipe": true, 00:14:21.272 "enable_quickack": false, 00:14:21.272 "enable_placement_id": 0, 00:14:21.272 "enable_zerocopy_send_server": false, 00:14:21.272 "enable_zerocopy_send_client": false, 00:14:21.272 "zerocopy_threshold": 0, 00:14:21.272 "tls_version": 0, 00:14:21.272 "enable_ktls": false 00:14:21.272 } 00:14:21.272 } 00:14:21.272 ] 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "subsystem": "vmd", 00:14:21.272 "config": [] 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "subsystem": "accel", 00:14:21.272 "config": [ 00:14:21.272 { 00:14:21.272 "method": "accel_set_options", 00:14:21.272 "params": { 00:14:21.272 "small_cache_size": 128, 00:14:21.272 "large_cache_size": 16, 00:14:21.272 "task_count": 2048, 00:14:21.272 "sequence_count": 2048, 00:14:21.272 "buf_count": 2048 00:14:21.272 } 00:14:21.272 } 00:14:21.272 ] 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "subsystem": "bdev", 00:14:21.272 "config": [ 00:14:21.272 { 00:14:21.272 "method": "bdev_set_options", 00:14:21.272 "params": { 00:14:21.272 "bdev_io_pool_size": 65535, 00:14:21.272 "bdev_io_cache_size": 256, 00:14:21.272 "bdev_auto_examine": true, 
00:14:21.272 "iobuf_small_cache_size": 128, 00:14:21.272 "iobuf_large_cache_size": 16 00:14:21.272 } 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "method": "bdev_raid_set_options", 00:14:21.272 "params": { 00:14:21.272 "process_window_size_kb": 1024, 00:14:21.272 "process_max_bandwidth_mb_sec": 0 00:14:21.272 } 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "method": "bdev_iscsi_set_options", 00:14:21.272 "params": { 00:14:21.272 "timeout_sec": 30 00:14:21.272 } 00:14:21.272 }, 00:14:21.272 { 00:14:21.272 "method": "bdev_nvme_set_options", 00:14:21.272 "params": { 00:14:21.272 "action_on_timeout": "none", 00:14:21.272 "timeout_us": 0, 00:14:21.272 "timeout_admin_us": 0, 00:14:21.272 "keep_alive_timeout_ms": 10000, 00:14:21.272 "arbitration_burst": 0, 00:14:21.272 "low_priority_weight": 0, 00:14:21.272 "medium_priority_weight": 0, 00:14:21.272 "high_priority_weight": 0, 00:14:21.272 "nvme_adminq_poll_period_us": 10000, 00:14:21.272 "nvme_ioq_poll_period_us": 0, 00:14:21.272 "io_queue_requests": 512, 00:14:21.272 "delay_cmd_submit": true, 00:14:21.272 "transport_retry_count": 4, 00:14:21.272 "bdev_retry_count": 3, 00:14:21.272 "transport_ack_timeout": 0, 00:14:21.273 "ctrlr_loss_timeout_sec": 0, 00:14:21.273 "reconnect_delay_sec": 0, 00:14:21.273 "fast_io_fail_timeout_sec": 0, 00:14:21.273 "disable_auto_failback": false, 00:14:21.273 "generate_uuids": false, 00:14:21.273 "transport_tos": 0, 00:14:21.273 "nvme_error_stat": false, 00:14:21.273 "rdma_srq_size": 0, 00:14:21.273 "io_path_stat": false, 00:14:21.273 "allow_accel_sequence": false, 00:14:21.273 "rdma_max_cq_size": 0, 00:14:21.273 "rdma_cm_event_timeout_ms": 0, 00:14:21.273 "dhchap_digests": [ 00:14:21.273 "sha256", 00:14:21.273 "sha384", 00:14:21.273 "sha512" 00:14:21.273 ], 00:14:21.273 "dhchap_dhgroups": [ 00:14:21.273 "null", 00:14:21.273 "ffdhe2048", 00:14:21.273 "ffdhe3072", 00:14:21.273 "ffdhe4096", 00:14:21.273 "ffdhe6144", 00:14:21.273 "ffdhe8192" 00:14:21.273 ] 00:14:21.273 } 00:14:21.273 }, 00:14:21.273 { 00:14:21.273 "method": "bdev_nvme_attach_controller", 00:14:21.273 "params": { 00:14:21.273 "name": "nvme0", 00:14:21.273 "trtype": "TCP", 00:14:21.273 "adrfam": "IPv4", 00:14:21.273 "traddr": "10.0.0.2", 00:14:21.273 "trsvcid": "4420", 00:14:21.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.273 "prchk_reftag": false, 00:14:21.273 "prchk_guard": false, 00:14:21.273 "ctrlr_loss_timeout_sec": 0, 00:14:21.273 "reconnect_delay_sec": 0, 00:14:21.273 "fast_io_fail_timeout_sec": 0, 00:14:21.273 "psk": "key0", 00:14:21.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.273 "hdgst": false, 00:14:21.273 "ddgst": false 00:14:21.273 } 00:14:21.273 }, 00:14:21.273 { 00:14:21.273 "method": "bdev_nvme_set_hotplug", 00:14:21.273 "params": { 00:14:21.273 "period_us": 100000, 00:14:21.273 "enable": false 00:14:21.273 } 00:14:21.273 }, 00:14:21.273 { 00:14:21.273 "method": "bdev_enable_histogram", 00:14:21.273 "params": { 00:14:21.273 "name": "nvme0n1", 00:14:21.273 "enable": true 00:14:21.273 } 00:14:21.273 }, 00:14:21.273 { 00:14:21.273 "method": "bdev_wait_for_examine" 00:14:21.273 } 00:14:21.273 ] 00:14:21.273 }, 00:14:21.273 { 00:14:21.273 "subsystem": "nbd", 00:14:21.273 "config": [] 00:14:21.273 } 00:14:21.273 ] 00:14:21.273 }' 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 85948 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85948 ']' 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# kill -0 85948 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85948 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:21.273 killing process with pid 85948 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85948' 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85948 00:14:21.273 Received shutdown signal, test time was about 1.000000 seconds 00:14:21.273 00:14:21.273 Latency(us) 00:14:21.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.273 =================================================================================================================== 00:14:21.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85948 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 85916 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85916 ']' 00:14:21.273 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85916 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85916 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:21.532 killing process with pid 85916 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85916' 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85916 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85916 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.532 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:21.532 "subsystems": [ 00:14:21.532 { 00:14:21.532 "subsystem": "keyring", 00:14:21.532 "config": [ 00:14:21.532 { 00:14:21.532 "method": "keyring_file_add_key", 00:14:21.532 "params": { 00:14:21.532 "name": "key0", 00:14:21.532 "path": "/tmp/tmp.jo7FAS4e6c" 00:14:21.532 } 00:14:21.532 } 00:14:21.532 ] 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "subsystem": "iobuf", 00:14:21.532 "config": [ 00:14:21.532 { 00:14:21.532 "method": "iobuf_set_options", 00:14:21.532 "params": { 00:14:21.532 "small_pool_count": 8192, 00:14:21.532 "large_pool_count": 
1024, 00:14:21.532 "small_bufsize": 8192, 00:14:21.532 "large_bufsize": 135168 00:14:21.532 } 00:14:21.532 } 00:14:21.532 ] 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "subsystem": "sock", 00:14:21.532 "config": [ 00:14:21.532 { 00:14:21.532 "method": "sock_set_default_impl", 00:14:21.532 "params": { 00:14:21.532 "impl_name": "uring" 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "sock_impl_set_options", 00:14:21.532 "params": { 00:14:21.532 "impl_name": "ssl", 00:14:21.532 "recv_buf_size": 4096, 00:14:21.532 "send_buf_size": 4096, 00:14:21.532 "enable_recv_pipe": true, 00:14:21.532 "enable_quickack": false, 00:14:21.532 "enable_placement_id": 0, 00:14:21.532 "enable_zerocopy_send_server": true, 00:14:21.532 "enable_zerocopy_send_client": false, 00:14:21.532 "zerocopy_threshold": 0, 00:14:21.532 "tls_version": 0, 00:14:21.532 "enable_ktls": false 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "sock_impl_set_options", 00:14:21.532 "params": { 00:14:21.532 "impl_name": "posix", 00:14:21.532 "recv_buf_size": 2097152, 00:14:21.532 "send_buf_size": 2097152, 00:14:21.532 "enable_recv_pipe": true, 00:14:21.532 "enable_quickack": false, 00:14:21.532 "enable_placement_id": 0, 00:14:21.532 "enable_zerocopy_send_server": true, 00:14:21.532 "enable_zerocopy_send_client": false, 00:14:21.532 "zerocopy_threshold": 0, 00:14:21.532 "tls_version": 0, 00:14:21.532 "enable_ktls": false 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "sock_impl_set_options", 00:14:21.532 "params": { 00:14:21.532 "impl_name": "uring", 00:14:21.532 "recv_buf_size": 2097152, 00:14:21.532 "send_buf_size": 2097152, 00:14:21.532 "enable_recv_pipe": true, 00:14:21.532 "enable_quickack": false, 00:14:21.532 "enable_placement_id": 0, 00:14:21.532 "enable_zerocopy_send_server": false, 00:14:21.532 "enable_zerocopy_send_client": false, 00:14:21.532 "zerocopy_threshold": 0, 00:14:21.532 "tls_version": 0, 00:14:21.532 "enable_ktls": false 00:14:21.532 } 00:14:21.532 } 00:14:21.532 ] 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "subsystem": "vmd", 00:14:21.532 "config": [] 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "subsystem": "accel", 00:14:21.532 "config": [ 00:14:21.532 { 00:14:21.532 "method": "accel_set_options", 00:14:21.532 "params": { 00:14:21.532 "small_cache_size": 128, 00:14:21.532 "large_cache_size": 16, 00:14:21.532 "task_count": 2048, 00:14:21.532 "sequence_count": 2048, 00:14:21.532 "buf_count": 2048 00:14:21.532 } 00:14:21.532 } 00:14:21.532 ] 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "subsystem": "bdev", 00:14:21.532 "config": [ 00:14:21.532 { 00:14:21.532 "method": "bdev_set_options", 00:14:21.532 "params": { 00:14:21.532 "bdev_io_pool_size": 65535, 00:14:21.532 "bdev_io_cache_size": 256, 00:14:21.532 "bdev_auto_examine": true, 00:14:21.532 "iobuf_small_cache_size": 128, 00:14:21.532 "iobuf_large_cache_size": 16 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "bdev_raid_set_options", 00:14:21.532 "params": { 00:14:21.532 "process_window_size_kb": 1024, 00:14:21.532 "process_max_bandwidth_mb_sec": 0 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "bdev_iscsi_set_options", 00:14:21.532 "params": { 00:14:21.532 "timeout_sec": 30 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "bdev_nvme_set_options", 00:14:21.532 "params": { 00:14:21.532 "action_on_timeout": "none", 00:14:21.532 "timeout_us": 0, 00:14:21.532 "timeout_admin_us": 0, 00:14:21.532 "keep_alive_timeout_ms": 10000, 00:14:21.532 
"arbitration_burst": 0, 00:14:21.532 "low_priority_weight": 0, 00:14:21.532 "medium_priority_weight": 0, 00:14:21.532 "high_priority_weight": 0, 00:14:21.532 "nvme_adminq_poll_period_us": 10000, 00:14:21.532 "nvme_ioq_poll_period_us": 0, 00:14:21.532 "io_queue_requests": 0, 00:14:21.532 "delay_cmd_submit": true, 00:14:21.532 "transport_retry_count": 4, 00:14:21.532 "bdev_retry_count": 3, 00:14:21.532 "transport_ack_timeout": 0, 00:14:21.532 "ctrlr_loss_timeout_sec": 0, 00:14:21.532 "reconnect_delay_sec": 0, 00:14:21.532 "fast_io_fail_timeout_sec": 0, 00:14:21.532 "disable_auto_failback": false, 00:14:21.532 "generate_uuids": false, 00:14:21.532 "transport_tos": 0, 00:14:21.532 "nvme_error_stat": false, 00:14:21.532 "rdma_srq_size": 0, 00:14:21.532 "io_path_stat": false, 00:14:21.532 "allow_accel_sequence": false, 00:14:21.532 "rdma_max_cq_size": 0, 00:14:21.532 "rdma_cm_event_timeout_ms": 0, 00:14:21.532 "dhchap_digests": [ 00:14:21.532 "sha256", 00:14:21.532 "sha384", 00:14:21.532 "sha512" 00:14:21.532 ], 00:14:21.532 "dhchap_dhgroups": [ 00:14:21.532 "null", 00:14:21.532 "ffdhe2048", 00:14:21.532 "ffdhe3072", 00:14:21.532 "ffdhe4096", 00:14:21.532 "ffdhe6144", 00:14:21.532 "ffdhe8192" 00:14:21.532 ] 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "bdev_nvme_set_hotplug", 00:14:21.532 "params": { 00:14:21.532 "period_us": 100000, 00:14:21.532 "enable": false 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "bdev_malloc_create", 00:14:21.532 "params": { 00:14:21.532 "name": "malloc0", 00:14:21.532 "num_blocks": 8192, 00:14:21.532 "block_size": 4096, 00:14:21.532 "physical_block_size": 4096, 00:14:21.532 "uuid": "18cde898-2c08-4f0c-a99e-2eae0fa3ae74", 00:14:21.532 "optimal_io_boundary": 0, 00:14:21.532 "md_size": 0, 00:14:21.532 "dif_type": 0, 00:14:21.532 "dif_is_head_of_md": false, 00:14:21.532 "dif_pi_format": 0 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "bdev_wait_for_examine" 00:14:21.532 } 00:14:21.532 ] 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "subsystem": "nbd", 00:14:21.532 "config": [] 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "subsystem": "scheduler", 00:14:21.532 "config": [ 00:14:21.532 { 00:14:21.532 "method": "framework_set_scheduler", 00:14:21.532 "params": { 00:14:21.532 "name": "static" 00:14:21.532 } 00:14:21.532 } 00:14:21.532 ] 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "subsystem": "nvmf", 00:14:21.532 "config": [ 00:14:21.532 { 00:14:21.532 "method": "nvmf_set_config", 00:14:21.532 "params": { 00:14:21.532 "discovery_filter": "match_any", 00:14:21.532 "admin_cmd_passthru": { 00:14:21.532 "identify_ctrlr": false 00:14:21.532 } 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "nvmf_set_max_subsystems", 00:14:21.532 "params": { 00:14:21.532 "max_subsystems": 1024 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "nvmf_set_crdt", 00:14:21.532 "params": { 00:14:21.532 "crdt1": 0, 00:14:21.532 "crdt2": 0, 00:14:21.532 "crdt3": 0 00:14:21.532 } 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "method": "nvmf_create_transport", 00:14:21.533 "params": { 00:14:21.533 "trtype": "TCP", 00:14:21.533 "max_queue_depth": 128, 00:14:21.533 "max_io_qpairs_per_ctrlr": 127, 00:14:21.533 "in_capsule_data_size": 4096, 00:14:21.533 "max_io_size": 131072, 00:14:21.533 "io_unit_size": 131072, 00:14:21.533 "max_aq_depth": 128, 00:14:21.533 "num_shared_buffers": 511, 00:14:21.533 "buf_cache_size": 4294967295, 00:14:21.533 "dif_insert_or_strip": false, 00:14:21.533 "zcopy": false, 
00:14:21.533 "c2h_success": false, 00:14:21.533 "sock_priority": 0, 00:14:21.533 "abort_timeout_sec": 1, 00:14:21.533 "ack_timeout": 0, 00:14:21.533 "data_wr_pool_size": 0 00:14:21.533 } 00:14:21.533 }, 00:14:21.533 { 00:14:21.533 "method": "nvmf_create_subsystem", 00:14:21.533 "params": { 00:14:21.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.533 "allow_any_host": false, 00:14:21.533 "serial_number": "00000000000000000000", 00:14:21.533 "model_number": "SPDK bdev Controller", 00:14:21.533 "max_namespaces": 32, 00:14:21.533 "min_cntlid": 1, 00:14:21.533 "max_cntlid": 65519, 00:14:21.533 "ana_reporting": false 00:14:21.533 } 00:14:21.533 }, 00:14:21.533 { 00:14:21.533 "method": "nvmf_subsystem_add_host", 00:14:21.533 "params": { 00:14:21.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.533 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.533 "psk": "key0" 00:14:21.533 } 00:14:21.533 }, 00:14:21.533 { 00:14:21.533 "method": "nvmf_subsystem_add_ns", 00:14:21.533 "params": { 00:14:21.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.533 "namespace": { 00:14:21.533 "nsid": 1, 00:14:21.533 "bdev_name": "malloc0", 00:14:21.533 "nguid": "18CDE8982C084F0CA99E2EAE0FA3AE74", 00:14:21.533 "uuid": "18cde898-2c08-4f0c-a99e-2eae0fa3ae74", 00:14:21.533 "no_auto_visible": false 00:14:21.533 } 00:14:21.533 } 00:14:21.533 }, 00:14:21.533 { 00:14:21.533 "method": "nvmf_subsystem_add_listener", 00:14:21.533 "params": { 00:14:21.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.533 "listen_address": { 00:14:21.533 "trtype": "TCP", 00:14:21.533 "adrfam": "IPv4", 00:14:21.533 "traddr": "10.0.0.2", 00:14:21.533 "trsvcid": "4420" 00:14:21.533 }, 00:14:21.533 "secure_channel": false, 00:14:21.533 "sock_impl": "ssl" 00:14:21.533 } 00:14:21.533 } 00:14:21.533 ] 00:14:21.533 } 00:14:21.533 ] 00:14:21.533 }' 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85996 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85996 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85996 ']' 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.533 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:21.533 [2024-07-25 01:57:36.804668] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 
00:14:21.533 [2024-07-25 01:57:36.804772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.791 [2024-07-25 01:57:36.930507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:21.791 [2024-07-25 01:57:36.942569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.791 [2024-07-25 01:57:36.980380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.791 [2024-07-25 01:57:36.980443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.791 [2024-07-25 01:57:36.980453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.791 [2024-07-25 01:57:36.980461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.791 [2024-07-25 01:57:36.980467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.791 [2024-07-25 01:57:36.980537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.049 [2024-07-25 01:57:37.124554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.049 [2024-07-25 01:57:37.178605] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.049 [2024-07-25 01:57:37.210554] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:22.049 [2024-07-25 01:57:37.220000] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=86028 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 86028 /var/tmp/bdevperf.sock 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 86028 ']' 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
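Before the initiator-side config is dumped below, note what the TLS wiring in these configs amounts to: a file-backed PSK registered in the keyring, the host NQN mapped to that key, and a listener created with the ssl socket implementation. A sketch of the equivalent rpc.py calls, assuming the CLI flag spellings mirror the JSON parameter names shown in this log (method names, NQNs, and values are taken from the config; verify the flags with rpc.py <method> -h on your build):

    # Register the pre-shared key file under the keyring name "key0".
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jo7FAS4e6c
    # Allow host1 to connect to cnode1, authenticating with that PSK.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # Listen with the ssl socket implementation (TLS) on 10.0.0.2:4420.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --sock-impl ssl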
00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:22.642 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:22.642 "subsystems": [ 00:14:22.642 { 00:14:22.642 "subsystem": "keyring", 00:14:22.642 "config": [ 00:14:22.642 { 00:14:22.642 "method": "keyring_file_add_key", 00:14:22.642 "params": { 00:14:22.642 "name": "key0", 00:14:22.642 "path": "/tmp/tmp.jo7FAS4e6c" 00:14:22.642 } 00:14:22.642 } 00:14:22.642 ] 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "subsystem": "iobuf", 00:14:22.642 "config": [ 00:14:22.642 { 00:14:22.642 "method": "iobuf_set_options", 00:14:22.642 "params": { 00:14:22.642 "small_pool_count": 8192, 00:14:22.642 "large_pool_count": 1024, 00:14:22.642 "small_bufsize": 8192, 00:14:22.642 "large_bufsize": 135168 00:14:22.642 } 00:14:22.642 } 00:14:22.642 ] 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "subsystem": "sock", 00:14:22.642 "config": [ 00:14:22.642 { 00:14:22.642 "method": "sock_set_default_impl", 00:14:22.642 "params": { 00:14:22.642 "impl_name": "uring" 00:14:22.642 } 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "method": "sock_impl_set_options", 00:14:22.642 "params": { 00:14:22.642 "impl_name": "ssl", 00:14:22.642 "recv_buf_size": 4096, 00:14:22.642 "send_buf_size": 4096, 00:14:22.642 "enable_recv_pipe": true, 00:14:22.642 "enable_quickack": false, 00:14:22.642 "enable_placement_id": 0, 00:14:22.642 "enable_zerocopy_send_server": true, 00:14:22.642 "enable_zerocopy_send_client": false, 00:14:22.642 "zerocopy_threshold": 0, 00:14:22.642 "tls_version": 0, 00:14:22.642 "enable_ktls": false 00:14:22.642 } 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "method": "sock_impl_set_options", 00:14:22.642 "params": { 00:14:22.642 "impl_name": "posix", 00:14:22.642 "recv_buf_size": 2097152, 00:14:22.642 "send_buf_size": 2097152, 00:14:22.642 "enable_recv_pipe": true, 00:14:22.642 "enable_quickack": false, 00:14:22.642 "enable_placement_id": 0, 00:14:22.642 "enable_zerocopy_send_server": true, 00:14:22.642 "enable_zerocopy_send_client": false, 00:14:22.642 "zerocopy_threshold": 0, 00:14:22.642 "tls_version": 0, 00:14:22.642 "enable_ktls": false 00:14:22.642 } 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "method": "sock_impl_set_options", 00:14:22.642 "params": { 00:14:22.642 "impl_name": "uring", 00:14:22.642 "recv_buf_size": 2097152, 00:14:22.642 "send_buf_size": 2097152, 00:14:22.642 "enable_recv_pipe": true, 00:14:22.642 "enable_quickack": false, 00:14:22.642 "enable_placement_id": 0, 00:14:22.642 "enable_zerocopy_send_server": false, 00:14:22.642 "enable_zerocopy_send_client": false, 00:14:22.642 "zerocopy_threshold": 0, 00:14:22.642 "tls_version": 0, 00:14:22.642 "enable_ktls": false 00:14:22.642 } 00:14:22.642 } 00:14:22.642 ] 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "subsystem": "vmd", 00:14:22.642 "config": [] 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "subsystem": "accel", 00:14:22.642 "config": [ 00:14:22.642 { 00:14:22.642 "method": "accel_set_options", 00:14:22.642 "params": { 00:14:22.642 "small_cache_size": 128, 00:14:22.642 "large_cache_size": 16, 00:14:22.642 "task_count": 2048, 00:14:22.642 "sequence_count": 2048, 00:14:22.642 "buf_count": 2048 
00:14:22.642 } 00:14:22.642 } 00:14:22.642 ] 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "subsystem": "bdev", 00:14:22.642 "config": [ 00:14:22.642 { 00:14:22.642 "method": "bdev_set_options", 00:14:22.642 "params": { 00:14:22.642 "bdev_io_pool_size": 65535, 00:14:22.642 "bdev_io_cache_size": 256, 00:14:22.642 "bdev_auto_examine": true, 00:14:22.642 "iobuf_small_cache_size": 128, 00:14:22.642 "iobuf_large_cache_size": 16 00:14:22.642 } 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "method": "bdev_raid_set_options", 00:14:22.642 "params": { 00:14:22.642 "process_window_size_kb": 1024, 00:14:22.642 "process_max_bandwidth_mb_sec": 0 00:14:22.642 } 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "method": "bdev_iscsi_set_options", 00:14:22.642 "params": { 00:14:22.642 "timeout_sec": 30 00:14:22.642 } 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "method": "bdev_nvme_set_options", 00:14:22.642 "params": { 00:14:22.642 "action_on_timeout": "none", 00:14:22.642 "timeout_us": 0, 00:14:22.642 "timeout_admin_us": 0, 00:14:22.642 "keep_alive_timeout_ms": 10000, 00:14:22.642 "arbitration_burst": 0, 00:14:22.642 "low_priority_weight": 0, 00:14:22.642 "medium_priority_weight": 0, 00:14:22.642 "high_priority_weight": 0, 00:14:22.642 "nvme_adminq_poll_period_us": 10000, 00:14:22.642 "nvme_ioq_poll_period_us": 0, 00:14:22.642 "io_queue_requests": 512, 00:14:22.642 "delay_cmd_submit": true, 00:14:22.642 "transport_retry_count": 4, 00:14:22.642 "bdev_retry_count": 3, 00:14:22.642 "transport_ack_timeout": 0, 00:14:22.642 "ctrlr_loss_timeout_sec": 0, 00:14:22.642 "reconnect_delay_sec": 0, 00:14:22.642 "fast_io_fail_timeout_sec": 0, 00:14:22.642 "disable_auto_failback": false, 00:14:22.642 "generate_uuids": false, 00:14:22.642 "transport_tos": 0, 00:14:22.642 "nvme_error_stat": false, 00:14:22.642 "rdma_srq_size": 0, 00:14:22.642 "io_path_stat": false, 00:14:22.642 "allow_accel_sequence": false, 00:14:22.642 "rdma_max_cq_size": 0, 00:14:22.642 "rdma_cm_event_timeout_ms": 0, 00:14:22.642 "dhchap_digests": [ 00:14:22.642 "sha256", 00:14:22.642 "sha384", 00:14:22.642 "sha512" 00:14:22.642 ], 00:14:22.642 "dhchap_dhgroups": [ 00:14:22.642 "null", 00:14:22.642 "ffdhe2048", 00:14:22.642 "ffdhe3072", 00:14:22.642 "ffdhe4096", 00:14:22.642 "ffdhe6144", 00:14:22.642 "ffdhe8192" 00:14:22.642 ] 00:14:22.642 } 00:14:22.642 }, 00:14:22.642 { 00:14:22.642 "method": "bdev_nvme_attach_controller", 00:14:22.642 "params": { 00:14:22.642 "name": "nvme0", 00:14:22.642 "trtype": "TCP", 00:14:22.642 "adrfam": "IPv4", 00:14:22.642 "traddr": "10.0.0.2", 00:14:22.643 "trsvcid": "4420", 00:14:22.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.643 "prchk_reftag": false, 00:14:22.643 "prchk_guard": false, 00:14:22.643 "ctrlr_loss_timeout_sec": 0, 00:14:22.643 "reconnect_delay_sec": 0, 00:14:22.643 "fast_io_fail_timeout_sec": 0, 00:14:22.643 "psk": "key0", 00:14:22.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.643 "hdgst": false, 00:14:22.643 "ddgst": false 00:14:22.643 } 00:14:22.643 }, 00:14:22.643 { 00:14:22.643 "method": "bdev_nvme_set_hotplug", 00:14:22.643 "params": { 00:14:22.643 "period_us": 100000, 00:14:22.643 "enable": false 00:14:22.643 } 00:14:22.643 }, 00:14:22.643 { 00:14:22.643 "method": "bdev_enable_histogram", 00:14:22.643 "params": { 00:14:22.643 "name": "nvme0n1", 00:14:22.643 "enable": true 00:14:22.643 } 00:14:22.643 }, 00:14:22.643 { 00:14:22.643 "method": "bdev_wait_for_examine" 00:14:22.643 } 00:14:22.643 ] 00:14:22.643 }, 00:14:22.643 { 00:14:22.643 "subsystem": "nbd", 00:14:22.643 "config": [] 00:14:22.643 } 
00:14:22.643 ] 00:14:22.643 }' 00:14:22.643 [2024-07-25 01:57:37.864854] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:14:22.643 [2024-07-25 01:57:37.864954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86028 ] 00:14:22.901 [2024-07-25 01:57:37.988414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:22.901 [2024-07-25 01:57:38.003204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.901 [2024-07-25 01:57:38.044657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.901 [2024-07-25 01:57:38.153087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.901 [2024-07-25 01:57:38.179733] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.836 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.836 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:23.836 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:23.836 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:23.836 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.836 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.094 Running I/O for 1 seconds... 
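The tls.sh@277 check traced above is the usual gate before I/O in these tests: query the bdevperf app over its private RPC socket, pull the controller name out with jq, and only then tell bdevperf to run the workload. Condensed from the trace (paths relative to the SPDK repo root):

    # Confirm the TLS-attached controller actually came up before running I/O.
    name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
        | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    # Kick off the workload in the already-running bdevperf instance (-z mode).
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests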
00:14:25.029 00:14:25.029 Latency(us) 00:14:25.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.029 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:25.029 Verification LBA range: start 0x0 length 0x2000 00:14:25.029 nvme0n1 : 1.02 4127.62 16.12 0.00 0.00 30682.89 6404.65 18945.86 00:14:25.029 =================================================================================================================== 00:14:25.029 Total : 4127.62 16.12 0.00 0.00 30682.89 6404.65 18945.86 00:14:25.029 0 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:25.029 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:25.029 nvmf_trace.0 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 86028 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 86028 ']' 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 86028 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86028 00:14:25.288 killing process with pid 86028 00:14:25.288 Received shutdown signal, test time was about 1.000000 seconds 00:14:25.288 00:14:25.288 Latency(us) 00:14:25.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.288 =================================================================================================================== 00:14:25.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86028' 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 86028 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 86028 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.288 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.547 rmmod nvme_tcp 00:14:25.547 rmmod nvme_fabrics 00:14:25.547 rmmod nvme_keyring 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85996 ']' 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85996 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85996 ']' 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85996 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85996 00:14:25.547 killing process with pid 85996 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85996' 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85996 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85996 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.547 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
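The cleanup traced above archives the shared-memory trace file before the apps are killed, so even a failed run leaves something for spdk_trace to analyze offline. The core of process_shm, condensed from the trace ($output_dir stands in for the .../output path shown in the log):

    # Archive SPDK's shared-memory trace file (shm id 0) for offline analysis.
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
    for n in $shm_files; do
        tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
    done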
00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.XxQDDgm7s4 /tmp/tmp.GYf9dqhHa2 /tmp/tmp.jo7FAS4e6c 00:14:25.807 00:14:25.807 real 1m17.887s 00:14:25.807 user 2m2.735s 00:14:25.807 sys 0m26.017s 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.807 ************************************ 00:14:25.807 END TEST nvmf_tls 00:14:25.807 ************************************ 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:25.807 ************************************ 00:14:25.807 START TEST nvmf_fips 00:14:25.807 ************************************ 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:25.807 * Looking for test storage... 00:14:25.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.807 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- 
# NET_TYPE=virt 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # 
: 0 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:25.807 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:25.808 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:26.067 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:26.068 Error setting digest 00:14:26.068 0082D7761A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:26.068 0082D7761A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.068 
01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:26.068 Cannot find device "nvmf_tgt_br" 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.068 Cannot find device "nvmf_tgt_br2" 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:26.068 01:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:26.068 Cannot find device "nvmf_tgt_br" 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:26.068 Cannot find device "nvmf_tgt_br2" 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.068 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:26.327 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:26.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:14:26.328 00:14:26.328 --- 10.0.0.2 ping statistics --- 00:14:26.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.328 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:26.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:26.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:14:26.328 00:14:26.328 --- 10.0.0.3 ping statistics --- 00:14:26.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.328 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:26.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:14:26.328 00:14:26.328 --- 10.0.0.1 ping statistics --- 00:14:26.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.328 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=86293 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 86293 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 86293 ']' 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.328 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 [2024-07-25 01:57:41.627439] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:14:26.587 [2024-07-25 01:57:41.627558] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.587 [2024-07-25 01:57:41.754533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:14:26.587 [2024-07-25 01:57:41.772084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.587 [2024-07-25 01:57:41.808233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.587 [2024-07-25 01:57:41.808366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.587 [2024-07-25 01:57:41.808377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.587 [2024-07-25 01:57:41.808384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.587 [2024-07-25 01:57:41.808391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.587 [2024-07-25 01:57:41.808423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.587 [2024-07-25 01:57:41.839276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:26.587 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.587 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:26.587 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.587 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:26.587 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:26.846 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.106 [2024-07-25 01:57:42.198135] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.106 [2024-07-25 01:57:42.214071] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:27.106 [2024-07-25 01:57:42.214263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.106 [2024-07-25 01:57:42.239709] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:27.106 malloc0 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=86319 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 86319 /var/tmp/bdevperf.sock 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 86319 ']' 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.106 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:27.106 [2024-07-25 01:57:42.349921] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:14:27.106 [2024-07-25 01:57:42.350012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86319 ] 00:14:27.365 [2024-07-25 01:57:42.471309] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:27.365 [2024-07-25 01:57:42.490773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.365 [2024-07-25 01:57:42.533741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.365 [2024-07-25 01:57:42.569108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:28.300 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.300 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:28.300 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.300 [2024-07-25 01:57:43.550718] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.300 [2024-07-25 01:57:43.550833] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:28.558 TLSTESTn1 00:14:28.558 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:28.558 Running I/O for 10 seconds... 
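For reference, the TLS path being timed here can be reproduced by hand against a target that is already listening on 10.0.0.2:4420. A minimal sketch using the PSK text, socket path, and NQNs exactly as they appear in this log (fips.sh@136 through @154); everything else about the harness is assumed to be in place:

  # recreate the interleaved TLS PSK file (key text copied from the trace above)
  key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
  chmod 0600 "$key"
  # attach a TLS-secured TCP controller through bdevperf's private RPC socket,
  # then kick off the verify workload bdevperf was launched with (-q 128 -o 4096 -w verify -t 10)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests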
00:14:38.549 00:14:38.549 Latency(us) 00:14:38.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.549 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:38.549 Verification LBA range: start 0x0 length 0x2000 00:14:38.549 TLSTESTn1 : 10.01 4269.78 16.68 0.00 0.00 29923.76 5332.25 31933.91 00:14:38.549 =================================================================================================================== 00:14:38.549 Total : 4269.78 16.68 0.00 0.00 29923.76 5332.25 31933.91 00:14:38.549 0 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:38.549 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:38.549 nvmf_trace.0 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86319 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 86319 ']' 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 86319 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86319 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:38.808 killing process with pid 86319 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86319' 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 86319 00:14:38.808 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.808 00:14:38.808 Latency(us) 00:14:38.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.808 =================================================================================================================== 00:14:38.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:38.808 [2024-07-25 01:57:53.901172] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:38.808 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 86319 00:14:38.808 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:38.808 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.808 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:38.808 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.808 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:38.809 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.809 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.067 rmmod nvme_tcp 00:14:39.067 rmmod nvme_fabrics 00:14:39.067 rmmod nvme_keyring 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 86293 ']' 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 86293 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 86293 ']' 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 86293 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86293 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:39.067 killing process with pid 86293 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86293' 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 86293 00:14:39.067 [2024-07-25 01:57:54.181748] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 86293 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:39.067 00:14:39.067 real 0m13.449s 00:14:39.067 user 0m18.510s 00:14:39.067 sys 0m5.673s 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.067 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:39.067 ************************************ 00:14:39.067 END TEST nvmf_fips 00:14:39.067 ************************************ 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:39.326 ************************************ 00:14:39.326 START TEST nvmf_fuzz 00:14:39.326 ************************************ 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:14:39.326 * Looking for test storage... 
00:14:39.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.326 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
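nvmftestinit is about to tear down and rebuild the same veth topology the FIPS run used: one host-side initiator interface, two target interfaces moved into the nvmf_tgt_ns_spdk namespace, and a bridge tying their peer ends together. Condensed from the nvmf_veth_init commands replayed below, with the interface names and addresses taken from this log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The second target pair (nvmf_tgt_if2/nvmf_tgt_br2 at 10.0.0.3) follows the same shape and is omitted from the sketch.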
00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:39.327 Cannot find device "nvmf_tgt_br" 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:39.327 Cannot find device "nvmf_tgt_br2" 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:39.327 Cannot find device "nvmf_tgt_br" 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # true 
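One detail worth noting in these teardown entries: each ip command that can fail on an already-clean host is immediately followed by a traced true, so the "Cannot find device" and "Cannot open network namespace" errors here are expected noise rather than failures. That trace shape is consistent with a plain cmd || true idiom, roughly:

  ip link set nvmf_tgt_br2 down 2>/dev/null || true
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true

The remaining deletes and re-adds below continue the same pattern before the fresh topology is built.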
00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:39.327 Cannot find device "nvmf_tgt_br2" 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:14:39.327 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:39.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:39.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:39.586 01:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:39.586 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:39.587 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:39.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:39.846 00:14:39.846 --- 10.0.0.2 ping statistics --- 00:14:39.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.846 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:39.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:39.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:39.846 00:14:39.846 --- 10.0.0.3 ping statistics --- 00:14:39.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.846 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:39.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:39.846 00:14:39.846 --- 10.0.0.1 ping statistics --- 00:14:39.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.846 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86645 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # 
waitforlisten 86645 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 86645 ']' 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.846 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.104 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.104 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:14:40.104 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.104 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.104 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.105 Malloc0 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
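Spelled out, the rpc_cmd calls above stand up a minimal fuzzing target: one TCP transport with an 8 KiB I/O unit size (-u 8192), a 64 MB malloc bdev with 512-byte blocks, and a subsystem exposing it on 10.0.0.2:4420. A standalone sketch of the same sequence via rpc.py (rpc_cmd is the harness wrapper around the RPC socket; arguments are copied verbatim from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvme_fuzz runs that follow then point at this trid; -t 30 bounds the first run to 30 seconds and -S 123456 appears to fix the fuzzer's RNG seed, which keeps any crash it shakes out reproducible.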
00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:14:40.105 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:14:40.363 Shutting down the fuzz application 00:14:40.363 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:14:40.621 Shutting down the fuzz application 00:14:40.621 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.621 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.621 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.621 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.621 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:40.621 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:14:40.621 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.621 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:14:40.880 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.880 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:14:40.880 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.880 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.880 rmmod nvme_tcp 00:14:40.880 rmmod nvme_fabrics 00:14:40.880 rmmod nvme_keyring 00:14:40.880 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 86645 ']' 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 86645 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 86645 ']' 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 86645 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86645 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.880 killing process with pid 86645 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86645' 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 86645 00:14:40.880 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 86645 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:14:41.139 00:14:41.139 real 0m1.823s 00:14:41.139 user 0m1.658s 00:14:41.139 sys 0m0.552s 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.139 ************************************ 00:14:41.139 END TEST nvmf_fuzz 00:14:41.139 ************************************ 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.139 ************************************ 00:14:41.139 START TEST nvmf_multiconnection 00:14:41.139 ************************************ 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:14:41.139 * Looking for test storage... 
00:14:41.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.139 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.140 01:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:41.140 Cannot find device "nvmf_tgt_br" 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.140 Cannot find device "nvmf_tgt_br2" 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:14:41.140 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:41.398 Cannot find device "nvmf_tgt_br" 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:41.398 Cannot find device "nvmf_tgt_br2" 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
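The nvmf_veth_init sequence in flight here (it continues through the address assignments and pings below) builds a self-contained test network: three veth pairs, with the target ends moved into the nvmf_tgt_ns_spdk namespace and the host-side peers enslaved to one bridge. A condensed sketch using the same interface names and addresses as the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second listener address
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow hairpin across the bridge
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # sanity-check both target paths

The "Cannot find device" and "Cannot open network namespace" messages above are expected on a clean host: the helper tears down any leftover topology before creating it, and those deletions fail harmlessly when nothing exists yet.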
00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:41.398 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.399 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:41.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:14:41.399 00:14:41.399 --- 10.0.0.2 ping statistics --- 00:14:41.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.399 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:41.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:41.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:41.657 00:14:41.657 --- 10.0.0.3 ping statistics --- 00:14:41.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.657 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:41.657 00:14:41.657 --- 10.0.0.1 ping statistics --- 00:14:41.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.657 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=86821 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 86821 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 86821 ']' 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
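With all three addresses answering pings, nvmfappstart launches the target inside the namespace (so its listeners can bind the veth addresses) and blocks until the RPC socket is live. A condensed stand-in for the waitforlisten helper that follows, polling a cheap RPC; the 0.5 s interval and 40-try cap here are illustrative, not the harness's exact numbers:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the target's RPC server responds on the default socket
    for _ in $(seq 1 40); do
        scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done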
00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.657 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.657 [2024-07-25 01:57:56.779494] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:14:41.658 [2024-07-25 01:57:56.779591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.658 [2024-07-25 01:57:56.898153] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:41.658 [2024-07-25 01:57:56.914353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.658 [2024-07-25 01:57:56.948080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.658 [2024-07-25 01:57:56.948380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.658 [2024-07-25 01:57:56.948513] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.658 [2024-07-25 01:57:56.948563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.658 [2024-07-25 01:57:56.948590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.658 [2024-07-25 01:57:56.948822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.658 [2024-07-25 01:57:56.948970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.658 [2024-07-25 01:57:56.949512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.658 [2024-07-25 01:57:56.949522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.916 [2024-07-25 01:57:56.978936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.916 [2024-07-25 01:57:57.114533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.916 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.917 Malloc1 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.917 [2024-07-25 01:57:57.191911] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:41.917 Malloc2 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:41.917 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.917 01:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 Malloc3 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.184 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 Malloc4 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 Malloc5 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 
01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 Malloc6 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.185 01:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 Malloc7 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.185 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 Malloc8 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:14:42.457 01:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 Malloc9 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 Malloc10 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:14:42.457 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.458 Malloc11 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
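The eleven near-identical xtrace blocks above are a single loop in multiconnection.sh: one malloc-backed subsystem per index, all listening on the same 10.0.0.2:4420. Condensed, again assuming the scripts/rpc.py wrapper against the default socket:

    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                        # 64 MiB, 512 B blocks
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

The -s "SPDK$i" serial numbers are what the initiator-side waitforserial checks grep for in the connect phase below.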
00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:42.458 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.716 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:14:42.716 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:42.716 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.716 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:42.716 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:14:44.617 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:44.617 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:44.617 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:14:44.617 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:44.617 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.617 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:14:44.617 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:44.617 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:14:44.876 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:14:44.876 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:44.876 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.876 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:44.876 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:14:46.776 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:46.776 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:46.776 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:14:46.776 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:46.776 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.776 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:14:46.776 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:46.776 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:14:46.776 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:14:46.776 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:46.776 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.776 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:46.776 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:51.204 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:14:53.133 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:53.133 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:53.133 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:14:53.133 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:53.133 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.133 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:14:53.133 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:53.133 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:14:53.391 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:14:53.391 01:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.391 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.391 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:53.391 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:14:55.291 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:55.291 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:55.291 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:14:55.291 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:55.291 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.291 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:14:55.291 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:55.291 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:14:55.549 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:14:55.549 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:55.549 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.549 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:55.549 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:14:57.449 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:57.449 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:57.449 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:14:57.449 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:57.449 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.449 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:14:57.449 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:57.449 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 
00:14:57.707 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:14:57.707 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:57.707 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.707 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:57.707 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:14:59.607 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:59.607 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:59.607 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:14:59.607 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:59.607 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.607 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:14:59.607 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:59.607 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:14:59.864 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:14:59.864 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.864 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.864 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:59.864 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:01.780 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:01.780 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:01.780 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:15:01.780 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:01.780 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.780 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:01.780 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:01.780 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:15:02.038 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:15:02.038 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:02.038 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.038 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:02.038 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:03.935 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:03.935 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:03.935 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:15:03.935 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:03.935 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.935 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:03.935 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:03.935 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:15:04.193 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:15:04.193 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:04.193 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.193 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:04.193 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:06.089 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:06.089 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:06.089 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:15:06.347 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:06.347 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.347 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
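
All eleven connects in the trace above follow the same two-step pattern: nvme connect against the subsystem's TCP listener at 10.0.0.2:4420, then waitforserial polling lsblk until a block device with the matching SPDKn serial appears. As a minimal sketch reconstructed from the xtrace alone (the real helper lives in autotest_common.sh; HOSTNQN, HOSTID and tries are illustrative stand-ins for the literal uuid:6f42f786-... values and the retry counter), the loop amounts to:

```bash
# Connect phase as traced above (multiconnection.sh lines 28-30 calling
# waitforserial from autotest_common.sh); a reconstruction, not the source.
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n "nqn.2016-06.io.spdk:cnode${i}" -a 10.0.0.2 -s 4420
  # waitforserial SPDK$i: up to 16 attempts, sleeping 2 s before each check,
  # until lsblk lists exactly one device whose serial matches.
  tries=0
  while (( tries++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK${i}") == 1 )) && break
  done
done
```

With all eleven namespaces visible, the trace moves on to the first fio pass:

00:15:06.347 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- #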
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:15:06.347 [global] 00:15:06.347 thread=1 00:15:06.347 invalidate=1 00:15:06.347 rw=read 00:15:06.347 time_based=1 00:15:06.347 runtime=10 00:15:06.347 ioengine=libaio 00:15:06.347 direct=1 00:15:06.347 bs=262144 00:15:06.347 iodepth=64 00:15:06.347 norandommap=1 00:15:06.347 numjobs=1 00:15:06.347 00:15:06.347 [job0] 00:15:06.347 filename=/dev/nvme0n1 00:15:06.347 [job1] 00:15:06.347 filename=/dev/nvme10n1 00:15:06.347 [job2] 00:15:06.347 filename=/dev/nvme1n1 00:15:06.347 [job3] 00:15:06.347 filename=/dev/nvme2n1 00:15:06.347 [job4] 00:15:06.347 filename=/dev/nvme3n1 00:15:06.347 [job5] 00:15:06.347 filename=/dev/nvme4n1 00:15:06.347 [job6] 00:15:06.347 filename=/dev/nvme5n1 00:15:06.347 [job7] 00:15:06.347 filename=/dev/nvme6n1 00:15:06.347 [job8] 00:15:06.347 filename=/dev/nvme7n1 00:15:06.347 [job9] 00:15:06.347 filename=/dev/nvme8n1 00:15:06.347 [job10] 00:15:06.347 filename=/dev/nvme9n1 00:15:06.347 Could not set queue depth (nvme0n1) 00:15:06.347 Could not set queue depth (nvme10n1) 00:15:06.347 Could not set queue depth (nvme1n1) 00:15:06.347 Could not set queue depth (nvme2n1) 00:15:06.347 Could not set queue depth (nvme3n1) 00:15:06.347 Could not set queue depth (nvme4n1) 00:15:06.347 Could not set queue depth (nvme5n1) 00:15:06.347 Could not set queue depth (nvme6n1) 00:15:06.347 Could not set queue depth (nvme7n1) 00:15:06.347 Could not set queue depth (nvme8n1) 00:15:06.347 Could not set queue depth (nvme9n1) 00:15:06.605 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:06.605 fio-3.35 00:15:06.605 Starting 11 threads 00:15:18.805 00:15:18.805 job0: (groupid=0, jobs=1): err= 0: pid=87272: Thu Jul 25 01:58:32 2024 00:15:18.805 read: IOPS=634, BW=159MiB/s (166MB/s)(1591MiB/10026msec) 00:15:18.805 slat (usec): min=17, max=93579, avg=1548.67, stdev=3727.81 00:15:18.805 clat (msec): min=7, max=179, avg=99.15, stdev=18.24 00:15:18.805 lat (msec): min=7, max=228, avg=100.70, stdev=18.54 00:15:18.805 clat percentiles (msec): 00:15:18.805 | 1.00th=[ 45], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 89], 00:15:18.805 | 30.00th=[ 91], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 99], 00:15:18.805 | 70.00th=[ 102], 80.00th=[ 108], 
90.00th=[ 123], 95.00th=[ 138], 00:15:18.805 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 176], 00:15:18.805 | 99.99th=[ 180] 00:15:18.805 bw ( KiB/s): min=104657, max=183808, per=8.39%, avg=161290.40, stdev=21660.24, samples=20 00:15:18.805 iops : min= 408, max= 718, avg=629.90, stdev=84.68, samples=20 00:15:18.805 lat (msec) : 10=0.02%, 20=0.14%, 50=1.04%, 100=64.56%, 250=34.25% 00:15:18.805 cpu : usr=0.38%, sys=2.33%, ctx=1511, majf=0, minf=4097 00:15:18.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:18.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.805 issued rwts: total=6365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.805 job1: (groupid=0, jobs=1): err= 0: pid=87274: Thu Jul 25 01:58:32 2024 00:15:18.805 read: IOPS=1553, BW=388MiB/s (407MB/s)(3919MiB/10094msec) 00:15:18.805 slat (usec): min=17, max=63894, avg=632.36, stdev=1696.33 00:15:18.805 clat (msec): min=5, max=212, avg=40.51, stdev=25.13 00:15:18.805 lat (msec): min=5, max=212, avg=41.14, stdev=25.49 00:15:18.805 clat percentiles (msec): 00:15:18.805 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:15:18.805 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:15:18.805 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 116], 00:15:18.805 | 99.00th=[ 133], 99.50th=[ 146], 99.90th=[ 194], 99.95th=[ 211], 00:15:18.805 | 99.99th=[ 213] 00:15:18.805 bw ( KiB/s): min=117760, max=504846, per=20.79%, avg=399672.95, stdev=158524.82, samples=20 00:15:18.805 iops : min= 460, max= 1972, avg=1561.15, stdev=619.31, samples=20 00:15:18.805 lat (msec) : 10=0.03%, 20=0.22%, 50=91.04%, 100=0.23%, 250=8.48% 00:15:18.805 cpu : usr=0.67%, sys=4.49%, ctx=3288, majf=0, minf=4097 00:15:18.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:18.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.805 issued rwts: total=15676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.805 job2: (groupid=0, jobs=1): err= 0: pid=87275: Thu Jul 25 01:58:32 2024 00:15:18.805 read: IOPS=516, BW=129MiB/s (135MB/s)(1303MiB/10095msec) 00:15:18.805 slat (usec): min=17, max=28945, avg=1913.21, stdev=4105.03 00:15:18.805 clat (msec): min=45, max=217, avg=121.83, stdev=10.62 00:15:18.805 lat (msec): min=45, max=222, avg=123.74, stdev=10.87 00:15:18.805 clat percentiles (msec): 00:15:18.805 | 1.00th=[ 79], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 116], 00:15:18.805 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 124], 00:15:18.805 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 132], 95.00th=[ 136], 00:15:18.806 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 211], 99.95th=[ 211], 00:15:18.806 | 99.99th=[ 218] 00:15:18.806 bw ( KiB/s): min=126722, max=138240, per=6.86%, avg=131800.15, stdev=3639.37, samples=20 00:15:18.806 iops : min= 495, max= 540, avg=514.75, stdev=14.12, samples=20 00:15:18.806 lat (msec) : 50=0.08%, 100=1.38%, 250=98.54% 00:15:18.806 cpu : usr=0.32%, sys=2.16%, ctx=1287, majf=0, minf=4097 00:15:18.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:18.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:15:18.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.806 issued rwts: total=5213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.806 job3: (groupid=0, jobs=1): err= 0: pid=87276: Thu Jul 25 01:58:32 2024 00:15:18.806 read: IOPS=512, BW=128MiB/s (134MB/s)(1295MiB/10101msec) 00:15:18.806 slat (usec): min=18, max=69434, avg=1927.88, stdev=4261.63 00:15:18.806 clat (msec): min=25, max=213, avg=122.66, stdev= 9.68 00:15:18.806 lat (msec): min=25, max=213, avg=124.59, stdev= 9.90 00:15:18.806 clat percentiles (msec): 00:15:18.806 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 117], 00:15:18.806 | 30.00th=[ 120], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:15:18.806 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 133], 95.00th=[ 136], 00:15:18.806 | 99.00th=[ 146], 99.50th=[ 174], 99.90th=[ 199], 99.95th=[ 207], 00:15:18.806 | 99.99th=[ 213] 00:15:18.806 bw ( KiB/s): min=117760, max=137964, per=6.81%, avg=130943.35, stdev=5172.03, samples=20 00:15:18.806 iops : min= 460, max= 538, avg=511.45, stdev=20.14, samples=20 00:15:18.806 lat (msec) : 50=0.19%, 100=0.35%, 250=99.46% 00:15:18.806 cpu : usr=0.23%, sys=2.07%, ctx=1264, majf=0, minf=4097 00:15:18.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:18.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.806 issued rwts: total=5179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.806 job4: (groupid=0, jobs=1): err= 0: pid=87277: Thu Jul 25 01:58:32 2024 00:15:18.806 read: IOPS=810, BW=203MiB/s (213MB/s)(2033MiB/10025msec) 00:15:18.806 slat (usec): min=19, max=19967, avg=1225.45, stdev=2864.47 00:15:18.806 clat (msec): min=7, max=130, avg=77.54, stdev=26.06 00:15:18.806 lat (msec): min=7, max=133, avg=78.77, stdev=26.44 00:15:18.806 clat percentiles (msec): 00:15:18.806 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 44], 00:15:18.806 | 30.00th=[ 65], 40.00th=[ 84], 50.00th=[ 89], 60.00th=[ 93], 00:15:18.806 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 104], 95.00th=[ 107], 00:15:18.806 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 127], 99.95th=[ 128], 00:15:18.806 | 99.99th=[ 131] 00:15:18.806 bw ( KiB/s): min=163328, max=446332, per=10.75%, avg=206542.90, stdev=85820.52, samples=20 00:15:18.806 iops : min= 638, max= 1743, avg=806.70, stdev=335.20, samples=20 00:15:18.806 lat (msec) : 10=0.02%, 20=0.58%, 50=20.15%, 100=62.74%, 250=16.51% 00:15:18.806 cpu : usr=0.51%, sys=3.03%, ctx=1720, majf=0, minf=4097 00:15:18.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:18.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.806 issued rwts: total=8130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.806 job5: (groupid=0, jobs=1): err= 0: pid=87278: Thu Jul 25 01:58:32 2024 00:15:18.806 read: IOPS=515, BW=129MiB/s (135MB/s)(1301MiB/10095msec) 00:15:18.806 slat (usec): min=18, max=53554, avg=1915.81, stdev=4188.60 00:15:18.806 clat (msec): min=47, max=212, avg=122.08, stdev=10.45 00:15:18.806 lat (msec): min=47, max=212, avg=124.00, stdev=10.64 00:15:18.806 clat percentiles (msec): 00:15:18.806 | 
1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 116], 00:15:18.806 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 124], 00:15:18.806 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 132], 95.00th=[ 136], 00:15:18.806 | 99.00th=[ 146], 99.50th=[ 174], 99.90th=[ 209], 99.95th=[ 213], 00:15:18.806 | 99.99th=[ 213] 00:15:18.806 bw ( KiB/s): min=126211, max=138752, per=6.85%, avg=131583.45, stdev=3777.10, samples=20 00:15:18.806 iops : min= 493, max= 542, avg=513.95, stdev=14.71, samples=20 00:15:18.806 lat (msec) : 50=0.29%, 100=0.44%, 250=99.27% 00:15:18.806 cpu : usr=0.33%, sys=2.39%, ctx=1233, majf=0, minf=4097 00:15:18.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:18.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.806 issued rwts: total=5204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.806 job6: (groupid=0, jobs=1): err= 0: pid=87279: Thu Jul 25 01:58:32 2024 00:15:18.806 read: IOPS=688, BW=172MiB/s (181MB/s)(1727MiB/10027msec) 00:15:18.806 slat (usec): min=17, max=85778, avg=1425.32, stdev=3586.79 00:15:18.806 clat (usec): min=1425, max=184742, avg=91402.98, stdev=22658.76 00:15:18.806 lat (usec): min=1474, max=221978, avg=92828.31, stdev=23006.99 00:15:18.806 clat percentiles (msec): 00:15:18.806 | 1.00th=[ 18], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 80], 00:15:18.806 | 30.00th=[ 87], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 95], 00:15:18.806 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 136], 00:15:18.806 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:15:18.806 | 99.99th=[ 186] 00:15:18.806 bw ( KiB/s): min=103936, max=270336, per=9.11%, avg=175148.00, stdev=34162.87, samples=20 00:15:18.806 iops : min= 406, max= 1056, avg=684.10, stdev=133.47, samples=20 00:15:18.806 lat (msec) : 2=0.01%, 10=0.36%, 20=0.83%, 50=0.75%, 100=72.43% 00:15:18.806 lat (msec) : 250=25.62% 00:15:18.806 cpu : usr=0.30%, sys=2.62%, ctx=1498, majf=0, minf=4097 00:15:18.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:18.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.806 issued rwts: total=6906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.806 job7: (groupid=0, jobs=1): err= 0: pid=87280: Thu Jul 25 01:58:32 2024 00:15:18.806 read: IOPS=596, BW=149MiB/s (156MB/s)(1505MiB/10096msec) 00:15:18.806 slat (usec): min=20, max=57995, avg=1652.57, stdev=3714.98 00:15:18.806 clat (msec): min=15, max=209, avg=105.45, stdev=18.80 00:15:18.806 lat (msec): min=17, max=209, avg=107.10, stdev=19.04 00:15:18.806 clat percentiles (msec): 00:15:18.806 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 91], 00:15:18.806 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 108], 00:15:18.806 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 126], 95.00th=[ 142], 00:15:18.806 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 194], 99.95th=[ 205], 00:15:18.806 | 99.99th=[ 211] 00:15:18.806 bw ( KiB/s): min=105472, max=176128, per=7.94%, avg=152510.90, stdev=22605.54, samples=20 00:15:18.806 iops : min= 412, max= 688, avg=595.65, stdev=88.30, samples=20 00:15:18.806 lat (msec) : 20=0.08%, 50=0.13%, 100=46.49%, 250=53.30% 00:15:18.806 cpu 
: usr=0.40%, sys=2.53%, ctx=1393, majf=0, minf=4097 00:15:18.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:18.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.806 issued rwts: total=6021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.806 job8: (groupid=0, jobs=1): err= 0: pid=87281: Thu Jul 25 01:58:32 2024 00:15:18.806 read: IOPS=592, BW=148MiB/s (155MB/s)(1495MiB/10090msec) 00:15:18.806 slat (usec): min=17, max=113208, avg=1667.68, stdev=4090.58 00:15:18.806 clat (msec): min=33, max=197, avg=106.15, stdev=19.24 00:15:18.806 lat (msec): min=33, max=244, avg=107.81, stdev=19.51 00:15:18.806 clat percentiles (msec): 00:15:18.806 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 91], 00:15:18.806 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 108], 00:15:18.806 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 129], 95.00th=[ 142], 00:15:18.806 | 99.00th=[ 176], 99.50th=[ 176], 99.90th=[ 186], 99.95th=[ 199], 00:15:18.806 | 99.99th=[ 199] 00:15:18.806 bw ( KiB/s): min=85844, max=176287, per=7.88%, avg=151473.90, stdev=24879.58, samples=20 00:15:18.806 iops : min= 335, max= 688, avg=591.50, stdev=97.10, samples=20 00:15:18.806 lat (msec) : 50=0.20%, 100=46.29%, 250=53.51% 00:15:18.806 cpu : usr=0.33%, sys=2.25%, ctx=1405, majf=0, minf=4097 00:15:18.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:15:18.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.806 issued rwts: total=5980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.806 job9: (groupid=0, jobs=1): err= 0: pid=87282: Thu Jul 25 01:58:32 2024 00:15:18.806 read: IOPS=591, BW=148MiB/s (155MB/s)(1494MiB/10097msec) 00:15:18.806 slat (usec): min=17, max=79407, avg=1668.56, stdev=3981.63 00:15:18.806 clat (msec): min=26, max=208, avg=106.26, stdev=18.80 00:15:18.806 lat (msec): min=26, max=208, avg=107.93, stdev=19.09 00:15:18.806 clat percentiles (msec): 00:15:18.806 | 1.00th=[ 79], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 91], 00:15:18.806 | 30.00th=[ 95], 40.00th=[ 99], 50.00th=[ 103], 60.00th=[ 109], 00:15:18.806 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 127], 95.00th=[ 144], 00:15:18.806 | 99.00th=[ 165], 99.50th=[ 182], 99.90th=[ 207], 99.95th=[ 209], 00:15:18.806 | 99.99th=[ 209] 00:15:18.806 bw ( KiB/s): min=101376, max=177152, per=7.88%, avg=151377.50, stdev=23635.46, samples=20 00:15:18.806 iops : min= 396, max= 692, avg=591.20, stdev=92.41, samples=20 00:15:18.806 lat (msec) : 50=0.15%, 100=45.14%, 250=54.71% 00:15:18.806 cpu : usr=0.24%, sys=2.26%, ctx=1377, majf=0, minf=4097 00:15:18.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:15:18.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.807 issued rwts: total=5977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.807 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.807 job10: (groupid=0, jobs=1): err= 0: pid=87283: Thu Jul 25 01:58:32 2024 00:15:18.807 read: IOPS=513, BW=128MiB/s (135MB/s)(1296MiB/10093msec) 00:15:18.807 slat (usec): min=17, max=70497, 
avg=1926.23, stdev=4334.21 00:15:18.807 clat (msec): min=65, max=209, avg=122.55, stdev=10.15 00:15:18.807 lat (msec): min=65, max=212, avg=124.48, stdev=10.42 00:15:18.807 clat percentiles (msec): 00:15:18.807 | 1.00th=[ 78], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 117], 00:15:18.807 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 123], 60.00th=[ 125], 00:15:18.807 | 70.00th=[ 127], 80.00th=[ 129], 90.00th=[ 133], 95.00th=[ 136], 00:15:18.807 | 99.00th=[ 144], 99.50th=[ 163], 99.90th=[ 199], 99.95th=[ 199], 00:15:18.807 | 99.99th=[ 211] 00:15:18.807 bw ( KiB/s): min=124678, max=141824, per=6.82%, avg=131057.85, stdev=4400.37, samples=20 00:15:18.807 iops : min= 487, max= 554, avg=511.75, stdev=17.16, samples=20 00:15:18.807 lat (msec) : 100=1.74%, 250=98.26% 00:15:18.807 cpu : usr=0.22%, sys=1.89%, ctx=1251, majf=0, minf=4097 00:15:18.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:18.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:18.807 issued rwts: total=5185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.807 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.807 00:15:18.807 Run status group 0 (all jobs): 00:15:18.807 READ: bw=1877MiB/s (1968MB/s), 128MiB/s-388MiB/s (134MB/s-407MB/s), io=18.5GiB (19.9GB), run=10025-10101msec 00:15:18.807 00:15:18.807 Disk stats (read/write): 00:15:18.807 nvme0n1: ios=12647/0, merge=0/0, ticks=1238622/0, in_queue=1238622, util=97.82% 00:15:18.807 nvme10n1: ios=31253/0, merge=0/0, ticks=1236038/0, in_queue=1236038, util=98.04% 00:15:18.807 nvme1n1: ios=10324/0, merge=0/0, ticks=1231102/0, in_queue=1231102, util=98.13% 00:15:18.807 nvme2n1: ios=10241/0, merge=0/0, ticks=1229316/0, in_queue=1229316, util=98.27% 00:15:18.807 nvme3n1: ios=16177/0, merge=0/0, ticks=1238093/0, in_queue=1238093, util=98.24% 00:15:18.807 nvme4n1: ios=10306/0, merge=0/0, ticks=1230860/0, in_queue=1230860, util=98.54% 00:15:18.807 nvme5n1: ios=13729/0, merge=0/0, ticks=1238816/0, in_queue=1238816, util=98.68% 00:15:18.807 nvme6n1: ios=11928/0, merge=0/0, ticks=1231064/0, in_queue=1231064, util=98.65% 00:15:18.807 nvme7n1: ios=11841/0, merge=0/0, ticks=1233941/0, in_queue=1233941, util=98.84% 00:15:18.807 nvme8n1: ios=11840/0, merge=0/0, ticks=1233484/0, in_queue=1233484, util=99.02% 00:15:18.807 nvme9n1: ios=10251/0, merge=0/0, ticks=1230984/0, in_queue=1230984, util=99.14%
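
fio-wrapper here is the SPDK test helper at scripts/fio-wrapper; judging from the trace it maps -t/-i/-d/-r onto the rw, bs, iodepth and runtime values of the [global] section echoed at the top of the run and points one [jobN] at each connected namespace. The per-job bandwidth figures are simply IOPS times block size (job0: 634 x 256 KiB = 158.5 MiB/s, printed as 159MiB/s), and the READ summary aggregates all eleven jobs: io=18.5 GiB over the roughly 10.1 s runs gives the 1877MiB/s shown. A standalone fio invocation equivalent to that job file might look like the sketch below (an approximation, not the wrapper's literal command; the eleven --name/--filename pairs are abbreviated):

```bash
# Approximate standalone equivalent of the generated job file above; options
# before the first --name are global, mirroring the [global] section.
fio --thread=1 --invalidate=1 --rw=read --time_based=1 --runtime=10 \
    --ioengine=libaio --direct=1 --bs=262144 --iodepth=64 \
    --norandommap=1 --numjobs=1 \
    --name=job0 --filename=/dev/nvme0n1 \
    --name=job1 --filename=/dev/nvme10n1
    # ...one --name/--filename pair per namespace, through job10 (/dev/nvme9n1)
```

The second pass below repeats the same job file with rw=randwrite:

01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:15:18.807 [global] 00:15:18.807 thread=1 00:15:18.807 invalidate=1 00:15:18.807 rw=randwrite 00:15:18.807 time_based=1 00:15:18.807 runtime=10 00:15:18.807 ioengine=libaio 00:15:18.807 direct=1 00:15:18.807 bs=262144 00:15:18.807 iodepth=64 00:15:18.807 norandommap=1 00:15:18.807 numjobs=1 00:15:18.807 00:15:18.807 [job0] 00:15:18.807 filename=/dev/nvme0n1 00:15:18.807 [job1] 00:15:18.807 filename=/dev/nvme10n1 00:15:18.807 [job2] 00:15:18.807 filename=/dev/nvme1n1 00:15:18.807 [job3] 00:15:18.807 filename=/dev/nvme2n1 00:15:18.807 [job4] 00:15:18.807 filename=/dev/nvme3n1 00:15:18.807 [job5] 00:15:18.807 filename=/dev/nvme4n1 00:15:18.807 [job6] 00:15:18.807 filename=/dev/nvme5n1 00:15:18.807 [job7] 00:15:18.807 filename=/dev/nvme6n1 00:15:18.807 [job8] 00:15:18.807 filename=/dev/nvme7n1 00:15:18.807 [job9] 00:15:18.807 filename=/dev/nvme8n1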
00:15:18.807 [job10] 00:15:18.807 filename=/dev/nvme9n1 00:15:18.807 Could not set queue depth (nvme0n1) 00:15:18.807 Could not set queue depth (nvme10n1) 00:15:18.807 Could not set queue depth (nvme1n1) 00:15:18.807 Could not set queue depth (nvme2n1) 00:15:18.807 Could not set queue depth (nvme3n1) 00:15:18.807 Could not set queue depth (nvme4n1) 00:15:18.807 Could not set queue depth (nvme5n1) 00:15:18.807 Could not set queue depth (nvme6n1) 00:15:18.807 Could not set queue depth (nvme7n1) 00:15:18.807 Could not set queue depth (nvme8n1) 00:15:18.807 Could not set queue depth (nvme9n1) 00:15:18.807 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.807 fio-3.35 00:15:18.807 Starting 11 threads 00:15:28.837 00:15:28.837 job0: (groupid=0, jobs=1): err= 0: pid=87481: Thu Jul 25 01:58:42 2024 00:15:28.837 write: IOPS=323, BW=80.9MiB/s (84.8MB/s)(822MiB/10163msec); 0 zone resets 00:15:28.837 slat (usec): min=20, max=78848, avg=3035.35, stdev=5411.28 00:15:28.837 clat (msec): min=83, max=356, avg=194.69, stdev=15.79 00:15:28.837 lat (msec): min=83, max=356, avg=197.72, stdev=15.06 00:15:28.837 clat percentiles (msec): 00:15:28.837 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:15:28.837 | 30.00th=[ 194], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 197], 00:15:28.837 | 70.00th=[ 199], 80.00th=[ 199], 90.00th=[ 201], 95.00th=[ 203], 00:15:28.837 | 99.00th=[ 259], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 355], 00:15:28.837 | 99.99th=[ 355] 00:15:28.837 bw ( KiB/s): min=69632, max=83968, per=5.90%, avg=82518.60, stdev=3307.33, samples=20 00:15:28.837 iops : min= 272, max= 328, avg=322.25, stdev=12.92, samples=20 00:15:28.837 lat (msec) : 100=0.24%, 250=98.72%, 500=1.03% 00:15:28.837 cpu : usr=0.47%, sys=1.12%, ctx=2944, majf=0, minf=1 00:15:28.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:15:28.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.837 issued rwts: total=0,3288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.837 job1: 
(groupid=0, jobs=1): err= 0: pid=87482: Thu Jul 25 01:58:42 2024 00:15:28.837 write: IOPS=453, BW=113MiB/s (119MB/s)(1149MiB/10132msec); 0 zone resets 00:15:28.838 slat (usec): min=17, max=11858, avg=2155.46, stdev=3889.11 00:15:28.838 clat (msec): min=6, max=282, avg=138.92, stdev=39.65 00:15:28.838 lat (msec): min=6, max=282, avg=141.07, stdev=40.09 00:15:28.838 clat percentiles (msec): 00:15:28.838 | 1.00th=[ 46], 5.00th=[ 54], 10.00th=[ 57], 20.00th=[ 146], 00:15:28.838 | 30.00th=[ 150], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:15:28.838 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:15:28.838 | 99.00th=[ 174], 99.50th=[ 224], 99.90th=[ 275], 99.95th=[ 275], 00:15:28.838 | 99.99th=[ 284] 00:15:28.838 bw ( KiB/s): min=100352, max=288256, per=8.29%, avg=115998.50, stdev=43161.21, samples=20 00:15:28.838 iops : min= 392, max= 1126, avg=453.10, stdev=168.60, samples=20 00:15:28.838 lat (msec) : 10=0.09%, 20=0.26%, 50=0.78%, 100=16.06%, 250=82.50% 00:15:28.838 lat (msec) : 500=0.30% 00:15:28.838 cpu : usr=0.87%, sys=1.30%, ctx=5974, majf=0, minf=1 00:15:28.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:15:28.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.838 issued rwts: total=0,4595,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.838 job2: (groupid=0, jobs=1): err= 0: pid=87494: Thu Jul 25 01:58:42 2024 00:15:28.838 write: IOPS=401, BW=100MiB/s (105MB/s)(1016MiB/10133msec); 0 zone resets 00:15:28.838 slat (usec): min=18, max=81423, avg=2453.59, stdev=4383.15 00:15:28.838 clat (msec): min=84, max=290, avg=157.01, stdev=11.32 00:15:28.838 lat (msec): min=84, max=290, avg=159.46, stdev=10.61 00:15:28.838 clat percentiles (msec): 00:15:28.838 | 1.00th=[ 140], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 150], 00:15:28.838 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:15:28.838 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 165], 00:15:28.838 | 99.00th=[ 201], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 279], 00:15:28.838 | 99.99th=[ 292] 00:15:28.838 bw ( KiB/s): min=86016, max=104448, per=7.32%, avg=102420.25, stdev=4070.98, samples=20 00:15:28.838 iops : min= 336, max= 408, avg=400.05, stdev=15.90, samples=20 00:15:28.838 lat (msec) : 100=0.20%, 250=99.38%, 500=0.42% 00:15:28.838 cpu : usr=0.90%, sys=1.06%, ctx=3434, majf=0, minf=1 00:15:28.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:28.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.838 issued rwts: total=0,4065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.838 job3: (groupid=0, jobs=1): err= 0: pid=87495: Thu Jul 25 01:58:42 2024 00:15:28.838 write: IOPS=403, BW=101MiB/s (106MB/s)(1024MiB/10141msec); 0 zone resets 00:15:28.838 slat (usec): min=18, max=28917, avg=2438.82, stdev=4234.20 00:15:28.838 clat (msec): min=30, max=294, avg=155.96, stdev=15.08 00:15:28.838 lat (msec): min=30, max=294, avg=158.40, stdev=14.69 00:15:28.838 clat percentiles (msec): 00:15:28.838 | 1.00th=[ 96], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 150], 00:15:28.838 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:15:28.838 | 70.00th=[ 
159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 163], 00:15:28.838 | 99.00th=[ 197], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:15:28.838 | 99.99th=[ 296] 00:15:28.838 bw ( KiB/s): min=98501, max=104448, per=7.38%, avg=103223.70, stdev=1744.00, samples=20 00:15:28.838 iops : min= 384, max= 408, avg=403.15, stdev= 6.93, samples=20 00:15:28.838 lat (msec) : 50=0.39%, 100=0.68%, 250=98.49%, 500=0.44% 00:15:28.838 cpu : usr=0.66%, sys=0.67%, ctx=4308, majf=0, minf=1 00:15:28.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:28.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.838 issued rwts: total=0,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.838 job4: (groupid=0, jobs=1): err= 0: pid=87496: Thu Jul 25 01:58:42 2024 00:15:28.838 write: IOPS=403, BW=101MiB/s (106MB/s)(1023MiB/10142msec); 0 zone resets 00:15:28.838 slat (usec): min=19, max=49696, avg=2437.45, stdev=4254.74 00:15:28.838 clat (msec): min=12, max=295, avg=156.08, stdev=17.14 00:15:28.838 lat (msec): min=12, max=295, avg=158.52, stdev=16.86 00:15:28.838 clat percentiles (msec): 00:15:28.838 | 1.00th=[ 68], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 150], 00:15:28.838 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:15:28.838 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 167], 00:15:28.838 | 99.00th=[ 199], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:15:28.838 | 99.99th=[ 296] 00:15:28.838 bw ( KiB/s): min=99129, max=104448, per=7.37%, avg=103126.90, stdev=1497.18, samples=20 00:15:28.838 iops : min= 387, max= 408, avg=402.80, stdev= 5.87, samples=20 00:15:28.838 lat (msec) : 20=0.20%, 50=0.49%, 100=0.78%, 250=98.09%, 500=0.44% 00:15:28.838 cpu : usr=0.82%, sys=1.21%, ctx=3900, majf=0, minf=1 00:15:28.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:28.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.838 issued rwts: total=0,4093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.838 job5: (groupid=0, jobs=1): err= 0: pid=87497: Thu Jul 25 01:58:42 2024 00:15:28.838 write: IOPS=327, BW=81.9MiB/s (85.8MB/s)(833MiB/10178msec); 0 zone resets 00:15:28.838 slat (usec): min=17, max=20865, avg=2996.23, stdev=5189.05 00:15:28.838 clat (msec): min=7, max=366, avg=192.29, stdev=21.82 00:15:28.838 lat (msec): min=7, max=366, avg=195.29, stdev=21.51 00:15:28.838 clat percentiles (msec): 00:15:28.838 | 1.00th=[ 87], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:15:28.838 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 197], 00:15:28.838 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 199], 95.00th=[ 201], 00:15:28.838 | 99.00th=[ 271], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 368], 00:15:28.838 | 99.99th=[ 368] 00:15:28.838 bw ( KiB/s): min=81920, max=88064, per=5.99%, avg=83753.05, stdev=1368.44, samples=20 00:15:28.838 iops : min= 320, max= 344, avg=326.80, stdev= 5.40, samples=20 00:15:28.838 lat (msec) : 10=0.12%, 50=0.36%, 100=0.72%, 250=97.66%, 500=1.14% 00:15:28.838 cpu : usr=0.69%, sys=0.91%, ctx=3749, majf=0, minf=1 00:15:28.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:15:28.838 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.838 issued rwts: total=0,3333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.838 job6: (groupid=0, jobs=1): err= 0: pid=87498: Thu Jul 25 01:58:42 2024 00:15:28.838 write: IOPS=325, BW=81.4MiB/s (85.4MB/s)(828MiB/10171msec); 0 zone resets 00:15:28.838 slat (usec): min=19, max=52753, avg=3014.96, stdev=5262.71 00:15:28.838 clat (msec): min=55, max=359, avg=193.45, stdev=17.37 00:15:28.838 lat (msec): min=55, max=359, avg=196.47, stdev=16.80 00:15:28.838 clat percentiles (msec): 00:15:28.838 | 1.00th=[ 140], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:15:28.838 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 197], 00:15:28.838 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 199], 95.00th=[ 201], 00:15:28.838 | 99.00th=[ 262], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 359], 00:15:28.838 | 99.99th=[ 359] 00:15:28.838 bw ( KiB/s): min=75624, max=86016, per=5.94%, avg=83150.00, stdev=2020.03, samples=20 00:15:28.838 iops : min= 295, max= 336, avg=324.75, stdev= 7.96, samples=20 00:15:28.838 lat (msec) : 100=0.60%, 250=98.37%, 500=1.03% 00:15:28.838 cpu : usr=0.74%, sys=0.92%, ctx=3400, majf=0, minf=1 00:15:28.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:15:28.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.838 issued rwts: total=0,3312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.838 job7: (groupid=0, jobs=1): err= 0: pid=87504: Thu Jul 25 01:58:42 2024 00:15:28.838 write: IOPS=326, BW=81.6MiB/s (85.6MB/s)(830MiB/10173msec); 0 zone resets 00:15:28.838 slat (usec): min=19, max=36861, avg=3006.87, stdev=5221.78 00:15:28.838 clat (msec): min=21, max=363, avg=193.02, stdev=21.01 00:15:28.838 lat (msec): min=21, max=363, avg=196.03, stdev=20.66 00:15:28.838 clat percentiles (msec): 00:15:28.838 | 1.00th=[ 92], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:15:28.838 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 197], 00:15:28.838 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 199], 95.00th=[ 203], 00:15:28.838 | 99.00th=[ 268], 99.50th=[ 317], 99.90th=[ 351], 99.95th=[ 363], 00:15:28.838 | 99.99th=[ 363] 00:15:28.838 bw ( KiB/s): min=81920, max=83968, per=5.96%, avg=83354.00, stdev=914.09, samples=20 00:15:28.838 iops : min= 320, max= 328, avg=325.55, stdev= 3.55, samples=20 00:15:28.838 lat (msec) : 50=0.48%, 100=0.60%, 250=97.77%, 500=1.14% 00:15:28.838 cpu : usr=0.58%, sys=1.09%, ctx=3781, majf=0, minf=1 00:15:28.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:15:28.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.838 issued rwts: total=0,3320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.838 job8: (groupid=0, jobs=1): err= 0: pid=87506: Thu Jul 25 01:58:42 2024 00:15:28.838 write: IOPS=1104, BW=276MiB/s (289MB/s)(2775MiB/10051msec); 0 zone resets 00:15:28.838 slat (usec): min=14, max=7697, avg=897.25, stdev=1506.82 00:15:28.838 clat (msec): min=10, max=109, avg=57.05, stdev= 3.51 00:15:28.838 lat (msec): 
min=10, max=109, avg=57.94, stdev= 3.26 00:15:28.838 clat percentiles (msec): 00:15:28.838 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:15:28.838 | 30.00th=[ 57], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:15:28.838 | 70.00th=[ 59], 80.00th=[ 59], 90.00th=[ 59], 95.00th=[ 60], 00:15:28.838 | 99.00th=[ 61], 99.50th=[ 65], 99.90th=[ 99], 99.95th=[ 103], 00:15:28.839 | 99.99th=[ 106] 00:15:28.839 bw ( KiB/s): min=276480, max=285184, per=20.19%, avg=282436.60, stdev=2323.55, samples=20 00:15:28.839 iops : min= 1080, max= 1114, avg=1103.15, stdev= 9.05, samples=20 00:15:28.839 lat (msec) : 20=0.14%, 50=0.33%, 100=99.44%, 250=0.08% 00:15:28.839 cpu : usr=1.42%, sys=2.01%, ctx=14444, majf=0, minf=1 00:15:28.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:28.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.839 issued rwts: total=0,11099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.839 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.839 job9: (groupid=0, jobs=1): err= 0: pid=87507: Thu Jul 25 01:58:42 2024 00:15:28.839 write: IOPS=1103, BW=276MiB/s (289MB/s)(2773MiB/10052msec); 0 zone resets 00:15:28.839 slat (usec): min=16, max=54359, avg=884.81, stdev=1646.48 00:15:28.839 clat (msec): min=3, max=179, avg=57.08, stdev=15.94 00:15:28.839 lat (msec): min=4, max=179, avg=57.96, stdev=16.10 00:15:28.839 clat percentiles (msec): 00:15:28.839 | 1.00th=[ 39], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 53], 00:15:28.839 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 56], 00:15:28.839 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 57], 95.00th=[ 58], 00:15:28.839 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 163], 99.95th=[ 171], 00:15:28.839 | 99.99th=[ 180] 00:15:28.839 bw ( KiB/s): min=96256, max=299520, per=20.18%, avg=282338.25, stdev=44988.79, samples=20 00:15:28.839 iops : min= 376, max= 1170, avg=1102.85, stdev=175.73, samples=20 00:15:28.839 lat (msec) : 4=0.01%, 10=0.07%, 20=0.19%, 50=1.24%, 100=95.93% 00:15:28.839 lat (msec) : 250=2.57% 00:15:28.839 cpu : usr=1.34%, sys=2.01%, ctx=14862, majf=0, minf=1 00:15:28.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:28.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.839 issued rwts: total=0,11093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.839 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.839 job10: (groupid=0, jobs=1): err= 0: pid=87508: Thu Jul 25 01:58:42 2024 00:15:28.839 write: IOPS=327, BW=81.8MiB/s (85.7MB/s)(832MiB/10174msec); 0 zone resets 00:15:28.839 slat (usec): min=19, max=28755, avg=3000.89, stdev=5190.44 00:15:28.839 clat (msec): min=16, max=360, avg=192.57, stdev=21.09 00:15:28.839 lat (msec): min=16, max=360, avg=195.57, stdev=20.76 00:15:28.839 clat percentiles (msec): 00:15:28.839 | 1.00th=[ 87], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:15:28.839 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 197], 00:15:28.839 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 199], 95.00th=[ 201], 00:15:28.839 | 99.00th=[ 264], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 359], 00:15:28.839 | 99.99th=[ 359] 00:15:28.839 bw ( KiB/s): min=81756, max=85504, per=5.97%, avg=83567.40, stdev=952.41, samples=20 00:15:28.839 iops : min= 319, max= 334, avg=326.35, stdev= 3.73, 
samples=20 00:15:28.839 lat (msec) : 20=0.09%, 50=0.48%, 100=0.60%, 250=97.69%, 500=1.14% 00:15:28.839 cpu : usr=0.60%, sys=1.08%, ctx=4045, majf=0, minf=1 00:15:28.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:15:28.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:28.839 issued rwts: total=0,3328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.839 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.839 00:15:28.839 Run status group 0 (all jobs): 00:15:28.839 WRITE: bw=1366MiB/s (1433MB/s), 80.9MiB/s-276MiB/s (84.8MB/s-289MB/s), io=13.6GiB (14.6GB), run=10051-10178msec 00:15:28.839 00:15:28.839 Disk stats (read/write): 00:15:28.839 nvme0n1: ios=49/6437, merge=0/0, ticks=61/1209098, in_queue=1209159, util=97.72% 00:15:28.839 nvme10n1: ios=49/9039, merge=0/0, ticks=44/1210357, in_queue=1210401, util=97.77% 00:15:28.839 nvme1n1: ios=40/7983, merge=0/0, ticks=41/1211051, in_queue=1211092, util=97.86% 00:15:28.839 nvme2n1: ios=31/8058, merge=0/0, ticks=25/1212200, in_queue=1212225, util=98.14% 00:15:28.839 nvme3n1: ios=21/8053, merge=0/0, ticks=41/1212293, in_queue=1212334, util=98.11% 00:15:28.839 nvme4n1: ios=0/6543, merge=0/0, ticks=0/1212333, in_queue=1212333, util=98.34% 00:15:28.839 nvme5n1: ios=0/6489, merge=0/0, ticks=0/1210221, in_queue=1210221, util=98.31% 00:15:28.839 nvme6n1: ios=0/6511, merge=0/0, ticks=0/1210480, in_queue=1210480, util=98.45% 00:15:28.839 nvme7n1: ios=0/22058, merge=0/0, ticks=0/1216972, in_queue=1216972, util=98.72% 00:15:28.839 nvme8n1: ios=0/22011, merge=0/0, ticks=0/1216246, in_queue=1216246, util=98.65% 00:15:28.839 nvme9n1: ios=0/6521, merge=0/0, ticks=0/1210312, in_queue=1210312, util=98.87%
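
fio's closing Disk stats block lists cumulative per-device counters as read/write pairs, so for this randwrite pass nearly everything lands in the write column: nvme0n1's ios=49/6437 is 49 reads against 6437 writes, and several devices saw no reads at all. The same two counters can be read outside fio from sysfs (a sketch; nvme0n1 is taken from the list above, and the field positions follow the kernel's Documentation/block/stat.rst):

```bash
# Fields 1 and 5 of /sys/block/<dev>/stat are cumulative read and write I/Os.
awk '{ printf "nvme0n1: reads=%s writes=%s\n", $1, $5 }' /sys/block/nvme0n1/stat
```

With both fio passes done, the test syncs and starts tearing the connections down:

01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.839 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection --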
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.839 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.839 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
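
That completes the first teardown iteration (cnode1). Reconstructed from the xtrace, the cleanup in multiconnection.sh lines 37-40 is equivalent to the sketch below; waitforserial_disconnect and rpc_cmd are the traced autotest helpers (rpc_cmd issues SPDK RPCs), the 1 s poll interval is an assumption, and the real helper also caps its retries:

```bash
# Disconnect the initiator-side controller, wait until no block device with
# serial SPDK$i remains, then delete the subsystem on the target over RPC.
for i in $(seq 1 "$NVMF_SUBSYS"); do
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
  while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
    sleep 1
  done
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done
```

The trace continues identically for the remaining ten subsystems:

00:15:28.839 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.839 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:15:28.839 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection --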
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:15:28.839 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:28.839 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:15:28.840 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:15:28.840 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:15:28.840 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:15:28.840 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:15:28.840 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:15:28.840 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:15:28.840 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:15:28.840 01:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:15:28.840 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:28.840 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.841 rmmod nvme_tcp 00:15:28.841 rmmod nvme_fabrics 00:15:28.841 rmmod nvme_keyring 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 86821 ']' 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 86821 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 86821 ']' 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 86821 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86821 00:15:28.841 killing process with pid 86821 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86821' 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 86821 00:15:28.841 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 86821 00:15:29.100 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.100 01:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.100 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.100 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.100 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.100 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.100 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.100 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:29.360 00:15:29.360 real 0m48.143s 00:15:29.360 user 2m35.225s 00:15:29.360 sys 0m36.065s 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:29.360 ************************************ 00:15:29.360 END TEST nvmf_multiconnection 00:15:29.360 ************************************ 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.360 ************************************ 00:15:29.360 START TEST nvmf_initiator_timeout 00:15:29.360 ************************************ 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:15:29.360 * Looking for test storage... 
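The nvmf_multiconnection teardown traced above repeats one fixed pattern for each of the eleven subsystems: disconnect the initiator side, confirm via lsblk that the block device with serial SPDK$i has disappeared, then delete the subsystem over the SPDK RPC socket. A condensed sketch of that loop follows; waitforserial_disconnect and rpc_cmd are harness helpers from common/autotest_common.sh, so the explicit poll and the direct scripts/rpc.py call here are approximations of the traced behavior, not the exact helper bodies.

  #!/usr/bin/env bash
  # Sketch of the per-subsystem teardown seen in the trace above.
  # Assumptions: run from the SPDK repo root, nvme-cli installed,
  # target RPC socket at the default /var/tmp/spdk.sock.
  NVMF_SUBSYS=11

  sync
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      # Drop the initiator-side connection to subsystem i.
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # Poll until the namespace with serial SPDK$i is gone from lsblk.
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1
      done
      # Remove the subsystem on the target via the SPDK JSON-RPC.
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done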
00:15:29.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.360 01:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.360 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.361 01:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:29.361 Cannot find device "nvmf_tgt_br" 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.361 Cannot find device "nvmf_tgt_br2" 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:29.361 Cannot find device "nvmf_tgt_br" 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:15:29.361 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:29.620 Cannot find device "nvmf_tgt_br2" 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # 
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:15:29.620 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:29.621 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:15:29.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:29.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms
00:15:29.880
00:15:29.880 --- 10.0.0.2 ping statistics ---
00:15:29.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:29.880 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:15:29.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:29.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms
00:15:29.880
00:15:29.880 --- 10.0.0.3 ping statistics ---
00:15:29.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:29.880 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:29.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:29.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
00:15:29.880
00:15:29.880 --- 10.0.0.1 ping statistics ---
00:15:29.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:29.880 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:29.880 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=87875
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 87875
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:29.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 87875 ']'
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
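Everything from the ip netns add above through the three pings is the harness building its veth-based test network: three veth pairs, a namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, a bridge (nvmf_br) joining the root-namespace peers, iptables openings for NVMe/TCP port 4420, and one ping per address as a connectivity check before nvmf_tgt starts inside the namespace. A condensed, slightly reordered replay of those iproute2/iptables steps follows; the names come straight from the log, while cleanup and error handling are omitted.

  #!/usr/bin/env bash
  # Condensed replay of the veth/namespace fixture traced above (no cleanup).
  NS=nvmf_tgt_ns_spdk

  ip netns add "$NS"

  # Three veth pairs; the *_br peers stay in the root namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Target-facing interfaces move into the namespace.
  ip link set nvmf_tgt_if netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"

  # The initiator keeps 10.0.0.1; the namespaced target answers on .2 and .3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  # A bridge stitches the root-namespace peers into one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Admit NVMe/TCP traffic on port 4420 and hairpin forwarding on the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity probes matching the three pings in the log.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1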
00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.880 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:29.880 [2024-07-25 01:58:45.061269] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:15:29.880 [2024-07-25 01:58:45.062051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.140 [2024-07-25 01:58:45.196641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:30.140 [2024-07-25 01:58:45.208192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.140 [2024-07-25 01:58:45.244320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.140 [2024-07-25 01:58:45.244640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.140 [2024-07-25 01:58:45.244797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.140 [2024-07-25 01:58:45.244973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.140 [2024-07-25 01:58:45.245018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.140 [2024-07-25 01:58:45.245209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.140 [2024-07-25 01:58:45.245392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.140 [2024-07-25 01:58:45.245472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.140 [2024-07-25 01:58:45.245472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.140 [2024-07-25 01:58:45.275778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:30.765 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.765 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:15:30.766 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.766 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.766 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:30.766 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.766 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:30.766 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.766 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.766 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.024 Malloc0 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:31.025 Delay0 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:31.025 [2024-07-25 01:58:46.083194] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:31.025 [2024-07-25 01:58:46.115359] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d --hostid=6f42f786-7175-4746-b686-8365485f4d3d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:31.025 01:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:31.025 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=87938 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:15:33.558 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:15:33.558 [global] 00:15:33.558 thread=1 00:15:33.558 invalidate=1 00:15:33.558 rw=write 00:15:33.558 time_based=1 00:15:33.558 runtime=60 00:15:33.558 ioengine=libaio 00:15:33.558 direct=1 00:15:33.558 bs=4096 00:15:33.558 iodepth=1 00:15:33.558 norandommap=0 00:15:33.558 numjobs=1 00:15:33.558 00:15:33.558 verify_dump=1 00:15:33.558 verify_backlog=512 00:15:33.558 verify_state_save=0 00:15:33.558 do_verify=1 00:15:33.558 verify=crc32c-intel 00:15:33.558 [job0] 00:15:33.558 filename=/dev/nvme0n1 00:15:33.558 Could not set queue depth (nvme0n1) 00:15:33.558 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.558 fio-3.35 00:15:33.558 Starting 1 thread 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:36.092 true 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.092 true 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:36.092 true 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:36.092 true 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.092 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:39.386 true 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:39.386 true 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:39.386 true 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:39.386 true 00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:15:39.386 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 87938
00:16:35.657
00:16:35.657 job0: (groupid=0, jobs=1): err= 0: pid=87960: Thu Jul 25 01:59:48 2024
00:16:35.657 read: IOPS=840, BW=3362KiB/s (3443kB/s)(197MiB/60000msec)
00:16:35.657 slat (usec): min=10, max=9855, avg=14.19, stdev=59.96
00:16:35.657 clat (usec): min=2, max=40594k, avg=999.41, stdev=180751.53
00:16:35.657 lat (usec): min=163, max=40594k, avg=1013.60, stdev=180751.53
00:16:35.657 clat percentiles (usec):
00:16:35.657 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176],
00:16:35.657 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196],
00:16:35.657 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 233],
00:16:35.657 | 99.00th=[ 258], 99.50th=[ 285], 99.90th=[ 461], 99.95th=[ 545],
00:16:35.657 | 99.99th=[ 857]
00:16:35.657 write: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec); 0 zone resets
00:16:35.657 slat (usec): min=13, max=639, avg=20.47, stdev= 6.51
00:16:35.657 clat (usec): min=116, max=2724, avg=150.94, stdev=26.48
00:16:35.657 lat (usec): min=134, max=2754, avg=171.42, stdev=27.71
00:16:35.657 clat percentiles (usec):
00:16:35.657 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137],
00:16:35.657 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151],
00:16:35.657 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 184],
00:16:35.657 | 99.00th=[ 206], 99.50th=[ 223], 99.90th=[ 388], 99.95th=[ 486],
00:16:35.657 | 99.99th=[ 832]
00:16:35.657 bw ( KiB/s): min= 2432, max=12288, per=100.00%, avg=10187.49, stdev=1976.75, samples=39
00:16:35.657 iops : min= 608, max= 3072, avg=2546.87, stdev=494.19, samples=39
00:16:35.657 lat (usec) : 4=0.01%, 250=99.07%, 500=0.87%, 750=0.04%, 1000=0.01%
00:16:35.657 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01%
00:16:35.657 cpu : usr=0.64%, sys=2.22%, ctx=101137, majf=0, minf=2
00:16:35.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:35.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:35.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:35.657 issued rwts: total=50437,50688,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:35.657 latency : target=0, window=0, percentile=100.00%, depth=1
00:16:35.657
00:16:35.657 Run status group 0 (all jobs):
00:16:35.657 READ: bw=3362KiB/s (3443kB/s), 3362KiB/s-3362KiB/s (3443kB/s-3443kB/s), io=197MiB (207MB), run=60000-60000msec
00:16:35.657 WRITE: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec
00:16:35.657
00:16:35.657 Disk stats (read/write):
00:16:35.657 nvme0n1: ios=50514/50394, merge=0/0, ticks=10139/8076, in_queue=18215, util=99.58%
00:16:35.657 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:35.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:35.657 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0
00:16:35.658 01:59:48
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.658 nvmf hotplug test: fio successful as expected 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.658 rmmod nvme_tcp 00:16:35.658 rmmod nvme_fabrics 00:16:35.658 rmmod nvme_keyring 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 87875 ']' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 87875 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 87875 ']' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 87875 00:16:35.658 
01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87875 00:16:35.658 killing process with pid 87875 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87875' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 87875 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 87875 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.658 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.658 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:35.658 00:16:35.658 real 1m4.535s 00:16:35.658 user 3m52.872s 00:16:35.658 sys 0m22.121s 00:16:35.658 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.658 ************************************ 00:16:35.658 END TEST nvmf_initiator_timeout 00:16:35.658 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:35.658 ************************************ 00:16:35.658 01:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:16:35.658 01:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:16:35.658 00:16:35.658 real 6m10.250s 00:16:35.658 user 15m26.557s 00:16:35.658 sys 1m56.004s 00:16:35.658 01:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.658 ************************************ 00:16:35.658 END TEST nvmf_target_extra 00:16:35.658 ************************************ 00:16:35.658 01:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.658 01:59:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:35.658 01:59:49 nvmf_tcp -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.658 01:59:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.658 01:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:35.658 ************************************ 00:16:35.658 START TEST nvmf_host 00:16:35.658 ************************************ 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:35.658 * Looking for test storage... 00:16:35.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.658 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.659 ************************************ 00:16:35.659 START TEST nvmf_identify 00:16:35.659 ************************************ 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:35.659 * Looking for test storage... 
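[Editor's note] The xtrace above shows test/nvmf/common.sh assembling the per-run environment (ports, serial number, host NQN) before run_test launches nvmf_identify; identify.sh re-sources the same file below, which is why the block repeats. A condensed sketch of that bootstrap, with values copied from the trace — the NVME_HOSTID derivation is an assumption consistent with the logged values, not a quote from common.sh:

    # condensed from the nvmf/common.sh xtrace above
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # fresh NQN per run, e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: UUID suffix of the NQN, matching the logged value
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NET_TYPE=virt                           # selects the veth/netns data path built further below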
00:16:35.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:35.659 Cannot find device "nvmf_tgt_br" 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.659 Cannot find device "nvmf_tgt_br2" 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:35.659 Cannot find device "nvmf_tgt_br" 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:35.659 Cannot find device "nvmf_tgt_br2" 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:35.659 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
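[Editor's note] At this point nvmf_veth_init has torn down any stale interfaces (hence the harmless "Cannot find device" and "Cannot open network namespace" messages) and is rebuilding the test topology; the remaining bridge-enslave, iptables, and ping steps continue in the trace below. The whole sequence, condensed from the logged commands:

    # veth/netns topology built by nvmf_veth_init (condensed from the trace)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up                      # plus 'up' on every veth end and the netns loopback, as logged
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # initiator -> target reachability checks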
00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:35.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:35.660 00:16:35.660 --- 10.0.0.2 ping statistics --- 00:16:35.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.660 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:35.660 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.660 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:35.660 00:16:35.660 --- 10.0.0.3 ping statistics --- 00:16:35.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.660 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:35.660 00:16:35.660 --- 10.0.0.1 ping statistics --- 00:16:35.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.660 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88823 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88823 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 88823 ']' 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.660 01:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 [2024-07-25 01:59:49.755833] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:16:35.660 [2024-07-25 01:59:49.755955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.660 [2024-07-25 01:59:49.880322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:35.660 [2024-07-25 01:59:49.896825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.660 [2024-07-25 01:59:49.930646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.660 [2024-07-25 01:59:49.930712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.660 [2024-07-25 01:59:49.930738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.660 [2024-07-25 01:59:49.930745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.660 [2024-07-25 01:59:49.930751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
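[Editor's note] With the namespace reachable, the target is started inside it and the test blocks until the RPC socket is up; the reactor-start notices that follow confirm the four-core 0xF mask. A minimal sketch of that launch, paraphrased from the trace above (process_shm, nvmftestfini, and waitforlisten are autotest helpers from this repo, not standalone tools):

    # launch nvmf_tgt inside the test namespace, as traced above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock accepts RPCs
    # -e 0xFFFF enables all tracepoint groups; per the startup notice, a snapshot
    # can be captured at runtime with: spdk_trace -s nvmf -i 0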
00:16:35.660 [2024-07-25 01:59:49.931088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.660 [2024-07-25 01:59:49.931181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.660 [2024-07-25 01:59:49.931339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.660 [2024-07-25 01:59:49.931348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.660 [2024-07-25 01:59:49.960172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 [2024-07-25 01:59:50.043752] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 Malloc0 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 [2024-07-25 01:59:50.134955] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.660 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:35.661 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.661 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.661 [ 00:16:35.661 { 00:16:35.661 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:35.661 "subtype": "Discovery", 00:16:35.661 "listen_addresses": [ 00:16:35.661 { 00:16:35.661 "trtype": "TCP", 00:16:35.661 "adrfam": "IPv4", 00:16:35.661 "traddr": "10.0.0.2", 00:16:35.661 "trsvcid": "4420" 00:16:35.661 } 00:16:35.661 ], 00:16:35.661 "allow_any_host": true, 00:16:35.661 "hosts": [] 00:16:35.661 }, 00:16:35.661 { 00:16:35.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.661 "subtype": "NVMe", 00:16:35.661 "listen_addresses": [ 00:16:35.661 { 00:16:35.661 "trtype": "TCP", 00:16:35.661 "adrfam": "IPv4", 00:16:35.661 "traddr": "10.0.0.2", 00:16:35.661 "trsvcid": "4420" 00:16:35.661 } 00:16:35.661 ], 00:16:35.661 "allow_any_host": true, 00:16:35.661 "hosts": [], 00:16:35.661 "serial_number": "SPDK00000000000001", 00:16:35.661 "model_number": "SPDK bdev Controller", 00:16:35.661 "max_namespaces": 32, 00:16:35.661 "min_cntlid": 1, 00:16:35.661 "max_cntlid": 65519, 00:16:35.661 "namespaces": [ 00:16:35.661 { 00:16:35.661 "nsid": 1, 00:16:35.661 "bdev_name": "Malloc0", 00:16:35.661 "name": "Malloc0", 00:16:35.661 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:35.661 "eui64": "ABCDEF0123456789", 00:16:35.661 "uuid": "2dd4bcb3-bc63-428b-9b08-cbc860ce26d8" 00:16:35.661 } 00:16:35.661 ] 00:16:35.661 } 00:16:35.661 ] 00:16:35.661 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.661 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:35.661 [2024-07-25 01:59:50.189296] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:16:35.661 [2024-07-25 01:59:50.189354] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88851 ] 00:16:35.661 [2024-07-25 01:59:50.309345] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
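[Editor's note] The identify test has now built the entire target over RPC — TCP transport, a 64 MB malloc bdev, the cnode1 subsystem with one namespace, and both a data and a discovery listener — with nvmf_get_subsystems dumping the result as the JSON above. Condensed from the rpc_cmd calls in the trace (rpc_cmd is the autotest helper that forwards to SPDK's scripts/rpc.py over /var/tmp/spdk.sock):

    # target build-out, condensed from the rpc_cmd calls traced above
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0           # 64 MB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_get_subsystems                            # emits the JSON shown above
    # spdk_nvme_identify then connects to the discovery subsystem at 10.0.0.2:4420;
    # its -L all flag enables the debug tracing that fills the remainder of this log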
00:16:35.661 [2024-07-25 01:59:50.327989] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:35.661 [2024-07-25 01:59:50.328065] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:35.661 [2024-07-25 01:59:50.328072] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:35.661 [2024-07-25 01:59:50.328084] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:35.661 [2024-07-25 01:59:50.328093] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:35.661 [2024-07-25 01:59:50.328229] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:35.661 [2024-07-25 01:59:50.328290] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe3ba90 0 00:16:35.661 [2024-07-25 01:59:50.340917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:35.661 [2024-07-25 01:59:50.340940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:35.661 [2024-07-25 01:59:50.340963] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:35.661 [2024-07-25 01:59:50.340967] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:35.661 [2024-07-25 01:59:50.341009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.341016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.341020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.661 [2024-07-25 01:59:50.341053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:35.661 [2024-07-25 01:59:50.341085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.661 [2024-07-25 01:59:50.348917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 01:59:50.348938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 01:59:50.348960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.348966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.661 [2024-07-25 01:59:50.348977] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:35.661 [2024-07-25 01:59:50.348985] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:35.661 [2024-07-25 01:59:50.348991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:35.661 [2024-07-25 01:59:50.349008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.661 [2024-07-25 01:59:50.349028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 01:59:50.349057] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.661 [2024-07-25 01:59:50.349110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 01:59:50.349116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 01:59:50.349135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.661 [2024-07-25 01:59:50.349145] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:35.661 [2024-07-25 01:59:50.349169] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:35.661 [2024-07-25 01:59:50.349180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.661 [2024-07-25 01:59:50.349205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 01:59:50.349234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.661 [2024-07-25 01:59:50.349282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 01:59:50.349293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 01:59:50.349298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.661 [2024-07-25 01:59:50.349314] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:35.661 [2024-07-25 01:59:50.349328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:35.661 [2024-07-25 01:59:50.349340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.661 [2024-07-25 01:59:50.349357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 01:59:50.349381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.661 [2024-07-25 01:59:50.349430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 01:59:50.349442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 01:59:50.349449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.661 [2024-07-25 01:59:50.349466] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:35.661 [2024-07-25 01:59:50.349485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.661 [2024-07-25 01:59:50.349503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 01:59:50.349531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.661 [2024-07-25 01:59:50.349579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.661 [2024-07-25 01:59:50.349590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.661 [2024-07-25 01:59:50.349597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.661 [2024-07-25 01:59:50.349612] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:35.661 [2024-07-25 01:59:50.349619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:35.661 [2024-07-25 01:59:50.349628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:35.661 [2024-07-25 01:59:50.349734] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:35.661 [2024-07-25 01:59:50.349746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:35.661 [2024-07-25 01:59:50.349760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.661 [2024-07-25 01:59:50.349769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.661 [2024-07-25 01:59:50.349776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.661 [2024-07-25 01:59:50.349801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.661 [2024-07-25 01:59:50.349865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.662 [2024-07-25 01:59:50.349879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.662 [2024-07-25 01:59:50.349902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 01:59:50.349909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.662 [2024-07-25 01:59:50.349915] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:35.662 [2024-07-25 01:59:50.349927] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.662 [2024-07-25 01:59:50.349932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 01:59:50.349936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.662 [2024-07-25 01:59:50.349944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.662 [2024-07-25 01:59:50.349971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.662 [2024-07-25 01:59:50.350020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.662 [2024-07-25 01:59:50.350033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.662 [2024-07-25 01:59:50.350039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 01:59:50.350046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.662 [2024-07-25 01:59:50.350054] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:35.662 [2024-07-25 01:59:50.350061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:35.662 [2024-07-25 01:59:50.350071] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:35.662 [2024-07-25 01:59:50.350082] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:35.662 [2024-07-25 01:59:50.350096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.662 [2024-07-25 01:59:50.350104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.662 [2024-07-25 01:59:50.350115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.662 [2024-07-25 01:59:50.350137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.662 [2024-07-25 01:59:50.350225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.662 [2024-07-25 01:59:50.350238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.662 [2024-07-25 01:59:50.350242] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.662 [2024-07-25 01:59:50.350246] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe3ba90): datao=0, datal=4096, cccid=0 00:16:35.662 [2024-07-25 01:59:50.350251] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe828c0) on tqpair(0xe3ba90): expected_datao=0, payload_size=4096 00:16:35.663 [2024-07-25 01:59:50.350256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350264] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350268] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.663 [2024-07-25 01:59:50.350285] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.663 [2024-07-25 01:59:50.350291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.663 [2024-07-25 01:59:50.350310] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:35.663 [2024-07-25 01:59:50.350319] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:35.663 [2024-07-25 01:59:50.350327] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:35.663 [2024-07-25 01:59:50.350338] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:35.663 [2024-07-25 01:59:50.350344] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:35.663 [2024-07-25 01:59:50.350349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:35.663 [2024-07-25 01:59:50.350359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:35.663 [2024-07-25 01:59:50.350370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.663 [2024-07-25 01:59:50.350391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.663 [2024-07-25 01:59:50.350414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.663 [2024-07-25 01:59:50.350468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.663 [2024-07-25 01:59:50.350478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.663 [2024-07-25 01:59:50.350482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90 00:16:35.663 [2024-07-25 01:59:50.350494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe3ba90) 00:16:35.663 [2024-07-25 01:59:50.350514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.663 [2024-07-25 01:59:50.350524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe3ba90) 00:16:35.663 [2024-07-25 01:59:50.350546] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.663 [2024-07-25 01:59:50.350556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe3ba90) 00:16:35.663 [2024-07-25 01:59:50.350569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.663 [2024-07-25 01:59:50.350577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe3ba90) 00:16:35.663 [2024-07-25 01:59:50.350599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.663 [2024-07-25 01:59:50.350608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:35.663 [2024-07-25 01:59:50.350621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:35.663 [2024-07-25 01:59:50.350630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe3ba90) 00:16:35.663 [2024-07-25 01:59:50.350641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 01:59:50.350672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe828c0, cid 0, qid 0 00:16:35.663 [2024-07-25 01:59:50.350680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82a40, cid 1, qid 0 00:16:35.663 [2024-07-25 01:59:50.350685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82bc0, cid 2, qid 0 00:16:35.663 [2024-07-25 01:59:50.350690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82d40, cid 3, qid 0 00:16:35.663 [2024-07-25 01:59:50.350696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82ec0, cid 4, qid 0 00:16:35.663 [2024-07-25 01:59:50.350775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.663 [2024-07-25 01:59:50.350790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.663 [2024-07-25 01:59:50.350796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82ec0) on tqpair=0xe3ba90 00:16:35.663 [2024-07-25 01:59:50.350810] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:35.663 [2024-07-25 01:59:50.350817] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:35.663 [2024-07-25 01:59:50.350835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:16:35.663 [2024-07-25 01:59:50.350859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe3ba90) 00:16:35.663 [2024-07-25 01:59:50.350870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.663 [2024-07-25 01:59:50.350903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82ec0, cid 4, qid 0 00:16:35.663 [2024-07-25 01:59:50.350959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.663 [2024-07-25 01:59:50.350971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.664 [2024-07-25 01:59:50.350976] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.350980] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe3ba90): datao=0, datal=4096, cccid=4 00:16:35.664 [2024-07-25 01:59:50.350984] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe82ec0) on tqpair(0xe3ba90): expected_datao=0, payload_size=4096 00:16:35.664 [2024-07-25 01:59:50.350989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.350997] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351001] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.664 [2024-07-25 01:59:50.351025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.664 [2024-07-25 01:59:50.351032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82ec0) on tqpair=0xe3ba90 00:16:35.664 [2024-07-25 01:59:50.351056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:35.664 [2024-07-25 01:59:50.351085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe3ba90) 00:16:35.664 [2024-07-25 01:59:50.351105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.664 [2024-07-25 01:59:50.351117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe3ba90) 00:16:35.664 [2024-07-25 01:59:50.351139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.664 [2024-07-25 01:59:50.351170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82ec0, cid 4, qid 0 00:16:35.664 [2024-07-25 01:59:50.351183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe83040, cid 5, qid 0 00:16:35.664 [2024-07-25 01:59:50.351276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.664 [2024-07-25 01:59:50.351299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.664 [2024-07-25 
01:59:50.351305] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351310] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe3ba90): datao=0, datal=1024, cccid=4 00:16:35.664 [2024-07-25 01:59:50.351318] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe82ec0) on tqpair(0xe3ba90): expected_datao=0, payload_size=1024 00:16:35.664 [2024-07-25 01:59:50.351325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351336] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351343] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.664 [2024-07-25 01:59:50.351358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.664 [2024-07-25 01:59:50.351362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe83040) on tqpair=0xe3ba90 00:16:35.664 [2024-07-25 01:59:50.351390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.664 [2024-07-25 01:59:50.351401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.664 [2024-07-25 01:59:50.351407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82ec0) on tqpair=0xe3ba90 00:16:35.664 [2024-07-25 01:59:50.351441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe3ba90) 00:16:35.664 [2024-07-25 01:59:50.351457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.664 [2024-07-25 01:59:50.351489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82ec0, cid 4, qid 0 00:16:35.664 [2024-07-25 01:59:50.351551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.664 [2024-07-25 01:59:50.351560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.664 [2024-07-25 01:59:50.351564] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351568] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe3ba90): datao=0, datal=3072, cccid=4 00:16:35.664 [2024-07-25 01:59:50.351575] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe82ec0) on tqpair(0xe3ba90): expected_datao=0, payload_size=3072 00:16:35.664 [2024-07-25 01:59:50.351583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351590] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351594] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.664 [2024-07-25 01:59:50.351612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.664 [2024-07-25 01:59:50.351618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.664 [2024-07-25 01:59:50.351625] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82ec0) on tqpair=0xe3ba90
00:16:35.664 [2024-07-25 01:59:50.351639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:35.664 [2024-07-25 01:59:50.351647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe3ba90)
00:16:35.664 [2024-07-25 01:59:50.351659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:35.664 [2024-07-25 01:59:50.351691] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82ec0, cid 4, qid 0
00:16:35.664 [2024-07-25 01:59:50.351786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:16:35.664 [2024-07-25 01:59:50.351801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:16:35.664 [2024-07-25 01:59:50.351808] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:16:35.664 [2024-07-25 01:59:50.351815] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe3ba90): datao=0, datal=8, cccid=4
00:16:35.664 [2024-07-25 01:59:50.351822] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe82ec0) on tqpair(0xe3ba90): expected_datao=0, payload_size=8
00:16:35.664 [2024-07-25 01:59:50.351827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:35.664 [2024-07-25 01:59:50.351835] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:16:35.664 [2024-07-25 01:59:50.351840] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:16:35.664 [2024-07-25 01:59:50.351875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:35.664 [2024-07-25 01:59:50.351898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:35.664 [2024-07-25 01:59:50.351905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:35.664 [2024-07-25 01:59:50.351912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82ec0) on tqpair=0xe3ba90
00:16:35.664 =====================================================
00:16:35.664 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:16:35.664 =====================================================
00:16:35.664 Controller Capabilities/Features
00:16:35.664 ================================
00:16:35.664 Vendor ID: 0000
00:16:35.664 Subsystem Vendor ID: 0000
00:16:35.664 Serial Number: ....................
00:16:35.664 Model Number: ........................................
00:16:35.664 Firmware Version: 24.09
00:16:35.664 Recommended Arb Burst: 0
00:16:35.664 IEEE OUI Identifier: 00 00 00
00:16:35.664 Multi-path I/O
00:16:35.664 May have multiple subsystem ports: No
00:16:35.664 May have multiple controllers: No
00:16:35.664 Associated with SR-IOV VF: No
00:16:35.664 Max Data Transfer Size: 131072
00:16:35.664 Max Number of Namespaces: 0
00:16:35.664 Max Number of I/O Queues: 1024
00:16:35.664 NVMe Specification Version (VS): 1.3
00:16:35.664 NVMe Specification Version (Identify): 1.3
00:16:35.664 Maximum Queue Entries: 128
00:16:35.664 Contiguous Queues Required: Yes
00:16:35.664 Arbitration Mechanisms Supported
00:16:35.664 Weighted Round Robin: Not Supported
00:16:35.664 Vendor Specific: Not Supported
00:16:35.664 Reset Timeout: 15000 ms
00:16:35.664 Doorbell Stride: 4 bytes
00:16:35.664 NVM Subsystem Reset: Not Supported
00:16:35.664 Command Sets Supported
00:16:35.664 NVM Command Set: Supported
00:16:35.664 Boot Partition: Not Supported
00:16:35.664 Memory Page Size Minimum: 4096 bytes
00:16:35.664 Memory Page Size Maximum: 4096 bytes
00:16:35.664 Persistent Memory Region: Not Supported
00:16:35.664 Optional Asynchronous Events Supported
00:16:35.664 Namespace Attribute Notices: Not Supported
00:16:35.664 Firmware Activation Notices: Not Supported
00:16:35.664 ANA Change Notices: Not Supported
00:16:35.664 PLE Aggregate Log Change Notices: Not Supported
00:16:35.664 LBA Status Info Alert Notices: Not Supported
00:16:35.664 EGE Aggregate Log Change Notices: Not Supported
00:16:35.664 Normal NVM Subsystem Shutdown event: Not Supported
00:16:35.664 Zone Descriptor Change Notices: Not Supported
00:16:35.664 Discovery Log Change Notices: Supported
00:16:35.664 Controller Attributes
00:16:35.664 128-bit Host Identifier: Not Supported
00:16:35.664 Non-Operational Permissive Mode: Not Supported
00:16:35.664 NVM Sets: Not Supported
00:16:35.664 Read Recovery Levels: Not Supported
00:16:35.664 Endurance Groups: Not Supported
00:16:35.664 Predictable Latency Mode: Not Supported
00:16:35.664 Traffic Based Keep Alive: Not Supported
00:16:35.664 Namespace Granularity: Not Supported
00:16:35.664 SQ Associations: Not Supported
00:16:35.664 UUID List: Not Supported
00:16:35.664 Multi-Domain Subsystem: Not Supported
00:16:35.664 Fixed Capacity Management: Not Supported
00:16:35.664 Variable Capacity Management: Not Supported
00:16:35.664 Delete Endurance Group: Not Supported
00:16:35.664 Delete NVM Set: Not Supported
00:16:35.664 Extended LBA Formats Supported: Not Supported
00:16:35.664 Flexible Data Placement Supported: Not Supported
00:16:35.664
00:16:35.664 Controller Memory Buffer Support
00:16:35.664 ================================
00:16:35.665 Supported: No
00:16:35.665
00:16:35.665 Persistent Memory Region Support
00:16:35.665 ================================
00:16:35.665 Supported: No
00:16:35.665
00:16:35.665 Admin Command Set Attributes
00:16:35.665 ============================
00:16:35.665 Security Send/Receive: Not Supported
00:16:35.665 Format NVM: Not Supported
00:16:35.665 Firmware Activate/Download: Not Supported
00:16:35.665 Namespace Management: Not Supported
00:16:35.665 Device Self-Test: Not Supported
00:16:35.665 Directives: Not Supported
00:16:35.665 NVMe-MI: Not Supported
00:16:35.665 Virtualization Management: Not Supported
00:16:35.665 Doorbell Buffer Config: Not Supported
00:16:35.665 Get LBA Status Capability: Not Supported
00:16:35.665 Command & Feature Lockdown Capability: Not Supported
00:16:35.665 Abort Command Limit: 1
00:16:35.665 Async Event Request Limit: 4
00:16:35.665 Number of Firmware Slots: N/A
00:16:35.665 Firmware Slot 1 Read-Only: N/A
00:16:35.665 Firmware Activation Without Reset: N/A
00:16:35.665 Multiple Update Detection Support: N/A
00:16:35.665 Firmware Update Granularity: No Information Provided
00:16:35.665 Per-Namespace SMART Log: No
00:16:35.665 Asymmetric Namespace Access Log Page: Not Supported
00:16:35.665 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:16:35.665 Command Effects Log Page: Not Supported
00:16:35.665 Get Log Page Extended Data: Supported
00:16:35.665 Telemetry Log Pages: Not Supported
00:16:35.665 Persistent Event Log Pages: Not Supported
00:16:35.665 Supported Log Pages Log Page: May Support
00:16:35.665 Commands Supported & Effects Log Page: Not Supported
00:16:35.665 Feature Identifiers & Effects Log Page: May Support
00:16:35.665 NVMe-MI Commands & Effects Log Page: May Support
00:16:35.665 Data Area 4 for Telemetry Log: Not Supported
00:16:35.665 Error Log Page Entries Supported: 128
00:16:35.665 Keep Alive: Not Supported
00:16:35.665
00:16:35.665 NVM Command Set Attributes
00:16:35.665 ==========================
00:16:35.665 Submission Queue Entry Size
00:16:35.665 Max: 1
00:16:35.665 Min: 1
00:16:35.665 Completion Queue Entry Size
00:16:35.665 Max: 1
00:16:35.665 Min: 1
00:16:35.665 Number of Namespaces: 0
00:16:35.665 Compare Command: Not Supported
00:16:35.665 Write Uncorrectable Command: Not Supported
00:16:35.665 Dataset Management Command: Not Supported
00:16:35.665 Write Zeroes Command: Not Supported
00:16:35.665 Set Features Save Field: Not Supported
00:16:35.665 Reservations: Not Supported
00:16:35.665 Timestamp: Not Supported
00:16:35.665 Copy: Not Supported
00:16:35.665 Volatile Write Cache: Not Present
00:16:35.665 Atomic Write Unit (Normal): 1
00:16:35.665 Atomic Write Unit (PFail): 1
00:16:35.665 Atomic Compare & Write Unit: 1
00:16:35.665 Fused Compare & Write: Supported
00:16:35.665 Scatter-Gather List
00:16:35.665 SGL Command Set: Supported
00:16:35.665 SGL Keyed: Supported
00:16:35.665 SGL Bit Bucket Descriptor: Not Supported
00:16:35.665 SGL Metadata Pointer: Not Supported
00:16:35.665 Oversized SGL: Not Supported
00:16:35.665 SGL Metadata Address: Not Supported
00:16:35.665 SGL Offset: Supported
00:16:35.665 Transport SGL Data Block: Not Supported
00:16:35.665 Replay Protected Memory Block: Not Supported
00:16:35.665
00:16:35.665 Firmware Slot Information
00:16:35.665 =========================
00:16:35.665 Active slot: 0
00:16:35.665
00:16:35.665
00:16:35.665 Error Log
00:16:35.665 =========
00:16:35.665
00:16:35.665 Active Namespaces
00:16:35.665 =================
00:16:35.665 Discovery Log Page
00:16:35.665 ==================
00:16:35.665 Generation Counter: 2
00:16:35.665 Number of Records: 2
00:16:35.665 Record Format: 0
00:16:35.665
00:16:35.665 Discovery Log Entry 0
00:16:35.665 ----------------------
00:16:35.665 Transport Type: 3 (TCP)
00:16:35.665 Address Family: 1 (IPv4)
00:16:35.665 Subsystem Type: 3 (Current Discovery Subsystem)
00:16:35.665 Entry Flags:
00:16:35.665 Duplicate Returned Information: 1
00:16:35.665 Explicit Persistent Connection Support for Discovery: 1
00:16:35.665 Transport Requirements:
00:16:35.665 Secure Channel: Not Required
00:16:35.665 Port ID: 0 (0x0000)
00:16:35.665 Controller ID: 65535 (0xffff)
00:16:35.665 Admin Max SQ Size: 128
00:16:35.665 Transport Service Identifier: 4420
00:16:35.665 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:16:35.665 Transport Address: 10.0.0.2
00:16:35.665 Discovery Log Entry 1
00:16:35.665 ----------------------
00:16:35.665 Transport Type: 3 (TCP)
00:16:35.665 Address Family: 1 (IPv4)
00:16:35.665 Subsystem Type: 2 (NVM Subsystem)
00:16:35.665 Entry Flags:
00:16:35.665 Duplicate Returned Information: 0
00:16:35.665 Explicit Persistent Connection Support for Discovery: 0
00:16:35.665 Transport Requirements:
00:16:35.665 Secure Channel: Not Required
00:16:35.665 Port ID: 0 (0x0000)
00:16:35.665 Controller ID: 65535 (0xffff)
00:16:35.665 Admin Max SQ Size: 128
00:16:35.665 Transport Service Identifier: 4420
00:16:35.665 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:16:35.665 Transport Address: 10.0.0.2
[2024-07-25 01:59:50.352030] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:16:35.665 [2024-07-25 01:59:50.352051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe828c0) on tqpair=0xe3ba90
00:16:35.665 [2024-07-25 01:59:50.352062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:35.665 [2024-07-25 01:59:50.352072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82a40) on tqpair=0xe3ba90
00:16:35.665 [2024-07-25 01:59:50.352095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:35.665 [2024-07-25 01:59:50.352104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82bc0) on tqpair=0xe3ba90
00:16:35.665 [2024-07-25 01:59:50.352112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:35.665 [2024-07-25 01:59:50.352118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82d40) on tqpair=0xe3ba90
00:16:35.665 [2024-07-25 01:59:50.352123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:35.665 [2024-07-25 01:59:50.352133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:35.665 [2024-07-25 01:59:50.352137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:35.665 [2024-07-25 01:59:50.352142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe3ba90)
00:16:35.665 [2024-07-25 01:59:50.352154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:35.665 [2024-07-25 01:59:50.352180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82d40, cid 3, qid 0
00:16:35.665 [2024-07-25 01:59:50.352242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:35.665 [2024-07-25 01:59:50.352254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:35.665 [2024-07-25 01:59:50.352260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:35.665 [2024-07-25 01:59:50.352266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82d40) on tqpair=0xe3ba90
00:16:35.665 [2024-07-25 01:59:50.352281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:35.665 [2024-07-25 01:59:50.352289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:35.665 [2024-07-25 01:59:50.352296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe3ba90)
00:16:35.665 [2024-07-25 01:59:50.352307]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.665 [2024-07-25 01:59:50.352335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82d40, cid 3, qid 0 00:16:35.665 [2024-07-25 01:59:50.352399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.665 [2024-07-25 01:59:50.352410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.665 [2024-07-25 01:59:50.352414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.665 [2024-07-25 01:59:50.352418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82d40) on tqpair=0xe3ba90 00:16:35.665 [2024-07-25 01:59:50.352424] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:35.665 [2024-07-25 01:59:50.352429] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:35.665 [2024-07-25 01:59:50.352441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.665 [2024-07-25 01:59:50.352448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.665 [2024-07-25 01:59:50.352455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe3ba90) 00:16:35.665 [2024-07-25 01:59:50.352466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.665 [2024-07-25 01:59:50.352489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82d40, cid 3, qid 0 00:16:35.665 [2024-07-25 01:59:50.352532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.665 [2024-07-25 01:59:50.352543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.665 [2024-07-25 01:59:50.352550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.665 [2024-07-25 01:59:50.352554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82d40) on tqpair=0xe3ba90 00:16:35.665 [2024-07-25 01:59:50.352566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.352571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.352576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe3ba90) 00:16:35.666 [2024-07-25 01:59:50.352587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 01:59:50.352616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82d40, cid 3, qid 0 00:16:35.666 [2024-07-25 01:59:50.352659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 01:59:50.352670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 01:59:50.352674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.352678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82d40) on tqpair=0xe3ba90 00:16:35.666 [2024-07-25 01:59:50.352689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.352695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.352700] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe3ba90)
00:16:35.666 [2024-07-25 01:59:50.352712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:35.666 [2024-07-25 01:59:50.352742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82d40, cid 3, qid 0
00:16:35.666 [2024-07-25 01:59:50.352799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:35.666 [2024-07-25 01:59:50.352810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:35.666 [2024-07-25 01:59:50.352815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.352822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82d40) on tqpair=0xe3ba90
00:16:35.666 [2024-07-25 01:59:50.352839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.352848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.352854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe3ba90)
00:16:35.666 [2024-07-25 01:59:50.352866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:35.666 [2024-07-25 01:59:50.356946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82d40, cid 3, qid 0
00:16:35.666 [2024-07-25 01:59:50.356991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:35.666 [2024-07-25 01:59:50.357000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:35.666 [2024-07-25 01:59:50.357004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.357009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82d40) on tqpair=0xe3ba90
00:16:35.666 [2024-07-25 01:59:50.357024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.357030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.357034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe3ba90)
00:16:35.666 [2024-07-25 01:59:50.357043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:35.666 [2024-07-25 01:59:50.357071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe82d40, cid 3, qid 0
00:16:35.666 [2024-07-25 01:59:50.357123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:35.666 [2024-07-25 01:59:50.357130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:35.666 [2024-07-25 01:59:50.357134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.357138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe82d40) on tqpair=0xe3ba90
00:16:35.666 [2024-07-25 01:59:50.357147] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds
00:16:35.666
00:16:35.666 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:16:35.666 [2024-07-25 01:59:50.401094] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization...
00:16:35.666 [2024-07-25 01:59:50.401181] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88853 ]
00:16:35.666 [2024-07-25 01:59:50.521242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:16:35.666 [2024-07-25 01:59:50.539538] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:16:35.666 [2024-07-25 01:59:50.539623] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:16:35.666 [2024-07-25 01:59:50.539630] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:16:35.666 [2024-07-25 01:59:50.539641] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:16:35.666 [2024-07-25 01:59:50.539650] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:16:35.666 [2024-07-25 01:59:50.539791] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:16:35.666 [2024-07-25 01:59:50.539841] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x24ada90 0
00:16:35.666 [2024-07-25 01:59:50.544919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:16:35.666 [2024-07-25 01:59:50.544940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:16:35.666 [2024-07-25 01:59:50.544962] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:16:35.666 [2024-07-25 01:59:50.544966] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:16:35.666 [2024-07-25 01:59:50.545005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.545012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.545016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90)
00:16:35.666 [2024-07-25 01:59:50.545029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:16:35.666 [2024-07-25 01:59:50.545058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0
00:16:35.666 [2024-07-25 01:59:50.552941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:35.666 [2024-07-25 01:59:50.552964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:35.666 [2024-07-25 01:59:50.552970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:35.666 [2024-07-25 01:59:50.552975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90
00:16:35.666 [2024-07-25 01:59:50.552985] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:16:35.666 [2024-07-25 01:59:50.552993] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:16:35.666 [2024-07-25 01:59:50.553000] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:16:35.666
[2024-07-25 01:59:50.553017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.666 [2024-07-25 01:59:50.553037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 01:59:50.553065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 00:16:35.666 [2024-07-25 01:59:50.553124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 01:59:50.553146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 01:59:50.553152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.666 [2024-07-25 01:59:50.553167] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:35.666 [2024-07-25 01:59:50.553179] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:35.666 [2024-07-25 01:59:50.553206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.666 [2024-07-25 01:59:50.553227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 01:59:50.553256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 00:16:35.666 [2024-07-25 01:59:50.553304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 01:59:50.553315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 01:59:50.553322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.666 [2024-07-25 01:59:50.553338] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:35.666 [2024-07-25 01:59:50.553351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:35.666 [2024-07-25 01:59:50.553360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.666 [2024-07-25 01:59:50.553375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.666 [2024-07-25 01:59:50.553400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 
00:16:35.666 [2024-07-25 01:59:50.553445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.666 [2024-07-25 01:59:50.553457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.666 [2024-07-25 01:59:50.553461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.666 [2024-07-25 01:59:50.553471] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:35.666 [2024-07-25 01:59:50.553483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.666 [2024-07-25 01:59:50.553498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.666 [2024-07-25 01:59:50.553509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.667 [2024-07-25 01:59:50.553535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 00:16:35.667 [2024-07-25 01:59:50.553579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.667 [2024-07-25 01:59:50.553591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.667 [2024-07-25 01:59:50.553597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.553604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.667 [2024-07-25 01:59:50.553612] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:35.667 [2024-07-25 01:59:50.553619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:35.667 [2024-07-25 01:59:50.553632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:35.667 [2024-07-25 01:59:50.553738] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:35.667 [2024-07-25 01:59:50.553745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:35.667 [2024-07-25 01:59:50.553772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.553779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.553786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.553799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.667 [2024-07-25 01:59:50.553827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 00:16:35.667 [2024-07-25 01:59:50.553881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.667 [2024-07-25 01:59:50.553916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.667 [2024-07-25 
01:59:50.553922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.553929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.667 [2024-07-25 01:59:50.553938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:35.667 [2024-07-25 01:59:50.553955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.553964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.553970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.553978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.667 [2024-07-25 01:59:50.554001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 00:16:35.667 [2024-07-25 01:59:50.554049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.667 [2024-07-25 01:59:50.554060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.667 [2024-07-25 01:59:50.554066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.667 [2024-07-25 01:59:50.554082] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:35.667 [2024-07-25 01:59:50.554091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:35.667 [2024-07-25 01:59:50.554104] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:35.667 [2024-07-25 01:59:50.554116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:35.667 [2024-07-25 01:59:50.554127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.554144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.667 [2024-07-25 01:59:50.554183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 00:16:35.667 [2024-07-25 01:59:50.554266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.667 [2024-07-25 01:59:50.554275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.667 [2024-07-25 01:59:50.554279] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554284] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24ada90): datao=0, datal=4096, cccid=0 00:16:35.667 [2024-07-25 01:59:50.554292] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f48c0) on tqpair(0x24ada90): expected_datao=0, payload_size=4096 00:16:35.667 [2024-07-25 01:59:50.554300] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554312] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554320] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.667 [2024-07-25 01:59:50.554338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.667 [2024-07-25 01:59:50.554341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.667 [2024-07-25 01:59:50.554358] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:35.667 [2024-07-25 01:59:50.554367] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:35.667 [2024-07-25 01:59:50.554375] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:35.667 [2024-07-25 01:59:50.554388] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:35.667 [2024-07-25 01:59:50.554397] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:35.667 [2024-07-25 01:59:50.554404] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:35.667 [2024-07-25 01:59:50.554414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:35.667 [2024-07-25 01:59:50.554422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.554440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.667 [2024-07-25 01:59:50.554471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 00:16:35.667 [2024-07-25 01:59:50.554521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.667 [2024-07-25 01:59:50.554533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.667 [2024-07-25 01:59:50.554540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.667 [2024-07-25 01:59:50.554555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.554571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.667 [2024-07-25 01:59:50.554577] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.554591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.667 [2024-07-25 01:59:50.554599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.554622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.667 [2024-07-25 01:59:50.554632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.554654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.667 [2024-07-25 01:59:50.554662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:35.667 [2024-07-25 01:59:50.554677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:35.667 [2024-07-25 01:59:50.554688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.667 [2024-07-25 01:59:50.554695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24ada90) 00:16:35.667 [2024-07-25 01:59:50.554706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.667 [2024-07-25 01:59:50.554737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f48c0, cid 0, qid 0 00:16:35.668 [2024-07-25 01:59:50.554745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4a40, cid 1, qid 0 00:16:35.668 [2024-07-25 01:59:50.554751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4bc0, cid 2, qid 0 00:16:35.668 [2024-07-25 01:59:50.554758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.668 [2024-07-25 01:59:50.554765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4ec0, cid 4, qid 0 00:16:35.668 [2024-07-25 01:59:50.554840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.668 [2024-07-25 01:59:50.554866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.668 [2024-07-25 01:59:50.554872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.554876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4ec0) on tqpair=0x24ada90 00:16:35.668 [2024-07-25 01:59:50.554882] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:35.668 [2024-07-25 01:59:50.554887] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.554897] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.554904] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.554914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.554921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.554927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24ada90) 00:16:35.668 [2024-07-25 01:59:50.554939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.668 [2024-07-25 01:59:50.554971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4ec0, cid 4, qid 0 00:16:35.668 [2024-07-25 01:59:50.555022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.668 [2024-07-25 01:59:50.555033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.668 [2024-07-25 01:59:50.555040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4ec0) on tqpair=0x24ada90 00:16:35.668 [2024-07-25 01:59:50.555118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.555137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.555151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24ada90) 00:16:35.668 [2024-07-25 01:59:50.555163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.668 [2024-07-25 01:59:50.555187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4ec0, cid 4, qid 0 00:16:35.668 [2024-07-25 01:59:50.555250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.668 [2024-07-25 01:59:50.555263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.668 [2024-07-25 01:59:50.555271] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555277] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24ada90): datao=0, datal=4096, cccid=4 00:16:35.668 [2024-07-25 01:59:50.555283] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f4ec0) on tqpair(0x24ada90): expected_datao=0, payload_size=4096 00:16:35.668 [2024-07-25 01:59:50.555288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555296] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555300] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.668 [2024-07-25 01:59:50.555319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.668 [2024-07-25 01:59:50.555323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4ec0) on tqpair=0x24ada90 00:16:35.668 [2024-07-25 01:59:50.555338] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:35.668 [2024-07-25 01:59:50.555354] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.555372] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.555382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24ada90) 00:16:35.668 [2024-07-25 01:59:50.555401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.668 [2024-07-25 01:59:50.555430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4ec0, cid 4, qid 0 00:16:35.668 [2024-07-25 01:59:50.555499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.668 [2024-07-25 01:59:50.555508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.668 [2024-07-25 01:59:50.555512] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555516] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24ada90): datao=0, datal=4096, cccid=4 00:16:35.668 [2024-07-25 01:59:50.555521] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f4ec0) on tqpair(0x24ada90): expected_datao=0, payload_size=4096 00:16:35.668 [2024-07-25 01:59:50.555528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555544] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.668 [2024-07-25 01:59:50.555567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.668 [2024-07-25 01:59:50.555571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4ec0) on tqpair=0x24ada90 00:16:35.668 [2024-07-25 01:59:50.555593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.555610] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:35.668 [2024-07-25 01:59:50.555623] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24ada90) 00:16:35.668 [2024-07-25 01:59:50.555635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.668 [2024-07-25 01:59:50.555664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4ec0, cid 4, qid 0 00:16:35.668 [2024-07-25 01:59:50.555749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.668 [2024-07-25 01:59:50.555759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.668 [2024-07-25 01:59:50.555764] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555770] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24ada90): datao=0, datal=4096, cccid=4 00:16:35.668 [2024-07-25 01:59:50.555777] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f4ec0) on tqpair(0x24ada90): expected_datao=0, payload_size=4096 00:16:35.668 [2024-07-25 01:59:50.555785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555796] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.668 [2024-07-25 01:59:50.555801] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.555810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.669 [2024-07-25 01:59:50.555820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.669 [2024-07-25 01:59:50.555826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.555831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4ec0) on tqpair=0x24ada90 00:16:35.669 [2024-07-25 01:59:50.555841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:35.669 [2024-07-25 01:59:50.555870] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:35.669 [2024-07-25 01:59:50.555889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:35.669 [2024-07-25 01:59:50.555897] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:35.669 [2024-07-25 01:59:50.555903] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:35.669 [2024-07-25 01:59:50.555909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:35.669 [2024-07-25 01:59:50.555915] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:35.669 [2024-07-25 01:59:50.555920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:35.669 [2024-07-25 01:59:50.555926] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 
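The trace above is the tail end of NVMe-oF controller initialization: the nvme_ctrlr state machine steps through identify controller, set number of queues, identify active namespaces, identify namespace, and namespace ID descriptors until it reaches "ready (no timeout)", arming the 5 s keep-alive seen at the top of this trace along the way. On the host side the whole sequence is driven by one blocking call. Below is a minimal sketch of that host side, assuming SPDK's public headers and the target coordinates that appear in this log (TCP, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); it is an illustration, not the test's actual code.

/* Host-side sketch of the sequence traced above. spdk_nvme_connect() drives
 * the same admin-queue state machine (Identify, Set Features Number of
 * Queues, namespace identifies) that the *DEBUG* lines record, and returns
 * once the controller reaches "ready". Target coordinates are copied from
 * this log; error handling is trimmed. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch"; /* illustrative app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same target as in the log: NVMe/TCP at 10.0.0.2:4420, subsystem cnode1. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	/* Blocks until the state machine logged above reaches "ready (no timeout)". */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() only returns once the state machine reports ready, so every Identify and Set Features capsule in this trace happens inside that single call; the "Sending keep alive every 5000000 us" line corresponds to the 10000 ms keep-alive granularity negotiated with this target.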
00:16:35.669 [2024-07-25 01:59:50.555949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.555957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.555966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.669 [2024-07-25 01:59:50.555974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.555978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.555982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.555992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.669 [2024-07-25 01:59:50.556034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4ec0, cid 4, qid 0 00:16:35.669 [2024-07-25 01:59:50.556044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f5040, cid 5, qid 0 00:16:35.669 [2024-07-25 01:59:50.556124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.669 [2024-07-25 01:59:50.556135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.669 [2024-07-25 01:59:50.556142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4ec0) on tqpair=0x24ada90 00:16:35.669 [2024-07-25 01:59:50.556159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.669 [2024-07-25 01:59:50.556169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.669 [2024-07-25 01:59:50.556175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f5040) on tqpair=0x24ada90 00:16:35.669 [2024-07-25 01:59:50.556193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.556205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.669 [2024-07-25 01:59:50.556226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f5040, cid 5, qid 0 00:16:35.669 [2024-07-25 01:59:50.556272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.669 [2024-07-25 01:59:50.556281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.669 [2024-07-25 01:59:50.556287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f5040) on tqpair=0x24ada90 00:16:35.669 [2024-07-25 01:59:50.556309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.556343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 
cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.669 [2024-07-25 01:59:50.556364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f5040, cid 5, qid 0 00:16:35.669 [2024-07-25 01:59:50.556412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.669 [2024-07-25 01:59:50.556425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.669 [2024-07-25 01:59:50.556431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f5040) on tqpair=0x24ada90 00:16:35.669 [2024-07-25 01:59:50.556453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.556472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.669 [2024-07-25 01:59:50.556502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f5040, cid 5, qid 0 00:16:35.669 [2024-07-25 01:59:50.556545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.669 [2024-07-25 01:59:50.556554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.669 [2024-07-25 01:59:50.556558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f5040) on tqpair=0x24ada90 00:16:35.669 [2024-07-25 01:59:50.556585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.556607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.669 [2024-07-25 01:59:50.556621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.556634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.669 [2024-07-25 01:59:50.556642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.556653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.669 [2024-07-25 01:59:50.556663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.669 [2024-07-25 01:59:50.556669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x24ada90) 00:16:35.669 [2024-07-25 01:59:50.556678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.669 [2024-07-25 
01:59:50.556710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f5040, cid 5, qid 0
00:16:35.669 [2024-07-25 01:59:50.556718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4ec0, cid 4, qid 0
00:16:35.669 [2024-07-25 01:59:50.556724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f51c0, cid 6, qid 0
00:16:35.669 [2024-07-25 01:59:50.556729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f5340, cid 7, qid 0
00:16:35.669 =====================================================
00:16:35.669 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:35.669 =====================================================
00:16:35.669 Controller Capabilities/Features
00:16:35.669 ================================
00:16:35.669 Vendor ID: 8086
00:16:35.669 Subsystem Vendor ID: 8086
00:16:35.669 Serial Number: SPDK00000000000001
00:16:35.669 Model Number: SPDK bdev Controller
00:16:35.669 Firmware Version: 24.09
00:16:35.669 Recommended Arb Burst: 6
00:16:35.669 IEEE OUI Identifier: e4 d2 5c
00:16:35.669 Multi-path I/O
00:16:35.669 May have multiple subsystem ports: Yes
00:16:35.669 May have multiple controllers: Yes
00:16:35.669 Associated with SR-IOV VF: No
00:16:35.669 Max Data Transfer Size: 131072
00:16:35.669 Max Number of Namespaces: 32
00:16:35.669 Max Number of I/O Queues: 127
00:16:35.669 NVMe Specification Version (VS): 1.3
00:16:35.669 NVMe Specification Version (Identify): 1.3
00:16:35.669 Maximum Queue Entries: 128
00:16:35.669 Contiguous Queues Required: Yes
00:16:35.669 Arbitration Mechanisms Supported
00:16:35.669 Weighted Round Robin: Not Supported
00:16:35.669 Vendor Specific: Not Supported
00:16:35.669 Reset Timeout: 15000 ms
00:16:35.669 Doorbell Stride: 4 bytes
00:16:35.669 NVM Subsystem Reset: Not Supported
00:16:35.669 Command Sets Supported
00:16:35.669 NVM Command Set: Supported
00:16:35.669 Boot Partition: Not Supported
00:16:35.669 Memory Page Size Minimum: 4096 bytes
00:16:35.669 Memory Page Size Maximum: 4096 bytes
00:16:35.669 Persistent Memory Region: Not Supported
00:16:35.669 Optional Asynchronous Events Supported
00:16:35.669 Namespace Attribute Notices: Supported
00:16:35.669 Firmware Activation Notices: Not Supported
00:16:35.669 ANA Change Notices: Not Supported
00:16:35.669 PLE Aggregate Log Change Notices: Not Supported
00:16:35.669 LBA Status Info Alert Notices: Not Supported
00:16:35.669 EGE Aggregate Log Change Notices: Not Supported
00:16:35.669 Normal NVM Subsystem Shutdown event: Not Supported
00:16:35.669 Zone Descriptor Change Notices: Not Supported
00:16:35.669 Discovery Log Change Notices: Not Supported
00:16:35.669 Controller Attributes
00:16:35.669 128-bit Host Identifier: Supported
00:16:35.669 Non-Operational Permissive Mode: Not Supported
00:16:35.669 NVM Sets: Not Supported
00:16:35.669 Read Recovery Levels: Not Supported
00:16:35.669 Endurance Groups: Not Supported
00:16:35.670 Predictable Latency Mode: Not Supported
00:16:35.670 Traffic Based Keep Alive: Not Supported
00:16:35.670 Namespace Granularity: Not Supported
00:16:35.670 SQ Associations: Not Supported
00:16:35.670 UUID List: Not Supported
00:16:35.670 Multi-Domain Subsystem: Not Supported
00:16:35.670 Fixed Capacity Management: Not Supported
00:16:35.670 Variable Capacity Management: Not Supported
00:16:35.670 Delete Endurance Group: Not Supported
00:16:35.670 Delete NVM Set: Not Supported
00:16:35.670 Extended LBA Formats Supported: Not Supported
00:16:35.670 Flexible Data Placement Supported: Not Supported
00:16:35.670
00:16:35.670 Controller Memory Buffer Support
00:16:35.670 ================================
00:16:35.670 Supported: No
00:16:35.670
00:16:35.670 Persistent Memory Region Support
00:16:35.670 ================================
00:16:35.670 Supported: No
00:16:35.670
00:16:35.670 Admin Command Set Attributes
00:16:35.670 ============================
00:16:35.670 Security Send/Receive: Not Supported
00:16:35.670 Format NVM: Not Supported
00:16:35.670 Firmware Activate/Download: Not Supported
00:16:35.670 Namespace Management: Not Supported
00:16:35.670 Device Self-Test: Not Supported
00:16:35.670 Directives: Not Supported
00:16:35.670 NVMe-MI: Not Supported
00:16:35.670 Virtualization Management: Not Supported
00:16:35.670 Doorbell Buffer Config: Not Supported
00:16:35.670 Get LBA Status Capability: Not Supported
00:16:35.670 Command & Feature Lockdown Capability: Not Supported
00:16:35.670 Abort Command Limit: 4
00:16:35.670 Async Event Request Limit: 4
00:16:35.670 Number of Firmware Slots: N/A
00:16:35.670 Firmware Slot 1 Read-Only: N/A
00:16:35.670 Firmware Activation Without Reset: N/A
00:16:35.670 Multiple Update Detection Support: N/A
00:16:35.670 Firmware Update Granularity: No Information Provided
00:16:35.670 Per-Namespace SMART Log: No
00:16:35.670 Asymmetric Namespace Access Log Page: Not Supported
00:16:35.670 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:16:35.670 Command Effects Log Page: Supported
00:16:35.670 Get Log Page Extended Data: Supported
00:16:35.670 Telemetry Log Pages: Not Supported
00:16:35.670 Persistent Event Log Pages: Not Supported
00:16:35.670 Supported Log Pages Log Page: May Support
00:16:35.670 Commands Supported & Effects Log Page: Not Supported
00:16:35.670 Feature Identifiers & Effects Log Page: May Support
00:16:35.670 NVMe-MI Commands & Effects Log Page: May Support
00:16:35.670 Data Area 4 for Telemetry Log: Not Supported
00:16:35.670 Error Log Page Entries Supported: 128
00:16:35.670 Keep Alive: Supported
00:16:35.670 Keep Alive Granularity: 10000 ms
00:16:35.670
00:16:35.670 NVM Command Set Attributes
00:16:35.670 ==========================
00:16:35.670 Submission Queue Entry Size
00:16:35.670 Max: 64
00:16:35.670 Min: 64
00:16:35.670 Completion Queue Entry Size
00:16:35.670 Max: 16
00:16:35.670 Min: 16
00:16:35.670 Number of Namespaces: 32
00:16:35.670 Compare Command: Supported
00:16:35.670 Write Uncorrectable Command: Not Supported
00:16:35.670 Dataset Management Command: Supported
00:16:35.670 Write Zeroes Command: Supported
00:16:35.670 Set Features Save Field: Not Supported
00:16:35.670 Reservations: Supported
00:16:35.670 Timestamp: Not Supported
00:16:35.670 Copy: Supported
00:16:35.670 Volatile Write Cache: Present
00:16:35.670 Atomic Write Unit (Normal): 1
00:16:35.670 Atomic Write Unit (PFail): 1
00:16:35.670 Atomic Compare & Write Unit: 1
00:16:35.670 Fused Compare & Write: Supported
00:16:35.670 Scatter-Gather List
00:16:35.670 SGL Command Set: Supported
00:16:35.670 SGL Keyed: Supported
00:16:35.670 SGL Bit Bucket Descriptor: Not Supported
00:16:35.670 SGL Metadata Pointer: Not Supported
00:16:35.670 Oversized SGL: Not Supported
00:16:35.670 SGL Metadata Address: Not Supported
00:16:35.670 SGL Offset: Supported
00:16:35.670 Transport SGL Data Block: Not Supported
00:16:35.670 Replay Protected Memory Block: Not Supported
00:16:35.670
00:16:35.670 Firmware Slot Information
00:16:35.670 =========================
00:16:35.670 Active slot: 1
00:16:35.670 Slot 1 Firmware Revision: 24.09
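The report that begins above is printed by SPDK's identify example once the admin commands complete; each field is a decoded member of the Identify Controller and Identify Namespace data fetched earlier in this trace. A host program can read the same structures from an attached controller handle. A minimal sketch follows (the helper name is illustrative; the SPDK calls are public API, and `ctrlr` is assumed to come from the previous sketch); the report itself continues below with the Commands Supported and Effects section.

/* Sketch: map a few of the report fields above back to the Identify data.
 * spdk_nvme_ctrlr_get_data() returns the cached Identify Controller page. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	uint32_t nsid;

	printf("Vendor ID: %04x\n", cdata->vid);      /* 8086 in the report */
	printf("Serial Number: %.20s\n", cdata->sn);  /* SPDK00000000000001 */
	printf("Model Number: %.40s\n", cdata->mn);   /* SPDK bdev Controller */
	printf("Max Number of Namespaces: %" PRIu32 "\n", cdata->nn);

	/* Walk the active namespace list reported by Identify Active NS. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace ID:%" PRIu32 " Size (in LBAs): %" PRIu64 "\n",
		       nsid, spdk_nvme_ns_get_num_sectors(ns));
	}
}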
00:16:35.670 00:16:35.670 00:16:35.670 Commands Supported and Effects 00:16:35.670 ============================== 00:16:35.670 Admin Commands 00:16:35.670 -------------- 00:16:35.670 Get Log Page (02h): Supported 00:16:35.670 Identify (06h): Supported 00:16:35.670 Abort (08h): Supported 00:16:35.670 Set Features (09h): Supported 00:16:35.670 Get Features (0Ah): Supported 00:16:35.670 Asynchronous Event Request (0Ch): Supported 00:16:35.670 Keep Alive (18h): Supported 00:16:35.670 I/O Commands 00:16:35.670 ------------ 00:16:35.670 Flush (00h): Supported LBA-Change 00:16:35.670 Write (01h): Supported LBA-Change 00:16:35.670 Read (02h): Supported 00:16:35.670 Compare (05h): Supported 00:16:35.670 Write Zeroes (08h): Supported LBA-Change 00:16:35.670 Dataset Management (09h): Supported LBA-Change 00:16:35.670 Copy (19h): Supported LBA-Change 00:16:35.670 00:16:35.670 Error Log 00:16:35.670 ========= 00:16:35.670 00:16:35.670 Arbitration 00:16:35.670 =========== 00:16:35.670 Arbitration Burst: 1 00:16:35.670 00:16:35.670 Power Management 00:16:35.670 ================ 00:16:35.670 Number of Power States: 1 00:16:35.670 Current Power State: Power State #0 00:16:35.670 Power State #0: 00:16:35.670 Max Power: 0.00 W 00:16:35.670 Non-Operational State: Operational 00:16:35.670 Entry Latency: Not Reported 00:16:35.670 Exit Latency: Not Reported 00:16:35.670 Relative Read Throughput: 0 00:16:35.670 Relative Read Latency: 0 00:16:35.670 Relative Write Throughput: 0 00:16:35.670 Relative Write Latency: 0 00:16:35.670 Idle Power: Not Reported 00:16:35.670 Active Power: Not Reported 00:16:35.670 Non-Operational Permissive Mode: Not Supported 00:16:35.670 00:16:35.670 Health Information 00:16:35.670 ================== 00:16:35.670 Critical Warnings: 00:16:35.670 Available Spare Space: OK 00:16:35.670 Temperature: OK 00:16:35.670 Device Reliability: OK 00:16:35.670 Read Only: No 00:16:35.670 Volatile Memory Backup: OK 00:16:35.670 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:35.670 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:35.670 Available Spare: 0% 00:16:35.670 Available Spare Threshold: 0% 00:16:35.670 Life Percentage Used:[2024-07-25 01:59:50.560958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.670 [2024-07-25 01:59:50.560977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.670 [2024-07-25 01:59:50.560983] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.560987] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24ada90): datao=0, datal=8192, cccid=5 00:16:35.670 [2024-07-25 01:59:50.560992] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f5040) on tqpair(0x24ada90): expected_datao=0, payload_size=8192 00:16:35.670 [2024-07-25 01:59:50.560997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561020] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561026] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.670 [2024-07-25 01:59:50.561038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.670 [2024-07-25 01:59:50.561042] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561046] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x24ada90): datao=0, datal=512, cccid=4 00:16:35.670 [2024-07-25 01:59:50.561051] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f4ec0) on tqpair(0x24ada90): expected_datao=0, payload_size=512 00:16:35.670 [2024-07-25 01:59:50.561055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561062] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561066] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.670 [2024-07-25 01:59:50.561078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.670 [2024-07-25 01:59:50.561081] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561085] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24ada90): datao=0, datal=512, cccid=6 00:16:35.670 [2024-07-25 01:59:50.561090] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f51c0) on tqpair(0x24ada90): expected_datao=0, payload_size=512 00:16:35.670 [2024-07-25 01:59:50.561094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561101] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561104] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:35.670 [2024-07-25 01:59:50.561116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:35.670 [2024-07-25 01:59:50.561120] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:35.670 [2024-07-25 01:59:50.561124] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24ada90): datao=0, datal=4096, cccid=7 00:16:35.671 [2024-07-25 01:59:50.561128] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f5340) on tqpair(0x24ada90): expected_datao=0, payload_size=4096 00:16:35.671 [2024-07-25 01:59:50.561133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561140] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561144] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.561155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.561159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f5040) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.561206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.561210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4ec0) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.561233] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.561237] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f51c0) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.561268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.561272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f5340) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x24ada90) 00:16:35.671 [2024-07-25 01:59:50.561392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.671 [2024-07-25 01:59:50.561418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f5340, cid 7, qid 0 00:16:35.671 [2024-07-25 01:59:50.561484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.561491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.561494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f5340) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561537] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:35.671 [2024-07-25 01:59:50.561549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f48c0) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.671 [2024-07-25 01:59:50.561561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4a40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.671 [2024-07-25 01:59:50.561571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4bc0) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.671 [2024-07-25 01:59:50.561580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.671 [2024-07-25 01:59:50.561593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.671 
[2024-07-25 01:59:50.561609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.671 [2024-07-25 01:59:50.561630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.671 [2024-07-25 01:59:50.561677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.561684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.561687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.671 [2024-07-25 01:59:50.561716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.671 [2024-07-25 01:59:50.561736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.671 [2024-07-25 01:59:50.561816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.561823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.561826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561835] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:35.671 [2024-07-25 01:59:50.561840] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:35.671 [2024-07-25 01:59:50.561866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.671 [2024-07-25 01:59:50.561882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.671 [2024-07-25 01:59:50.561912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.671 [2024-07-25 01:59:50.561967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.561974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.561978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.561993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.561998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562002] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.671 [2024-07-25 01:59:50.562009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.671 [2024-07-25 01:59:50.562029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.671 [2024-07-25 01:59:50.562074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.562080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.562084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.562099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.671 [2024-07-25 01:59:50.562115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.671 [2024-07-25 01:59:50.562131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.671 [2024-07-25 01:59:50.562206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.562214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.562218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.562231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.671 [2024-07-25 01:59:50.562246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.671 [2024-07-25 01:59:50.562262] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.671 [2024-07-25 01:59:50.562305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.562311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.562314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.562328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.671 [2024-07-25 01:59:50.562343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.671 [2024-07-25 01:59:50.562358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.671 [2024-07-25 01:59:50.562403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.671 [2024-07-25 01:59:50.562409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.671 [2024-07-25 01:59:50.562412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.671 [2024-07-25 01:59:50.562426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.671 [2024-07-25 01:59:50.562434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.671 [2024-07-25 01:59:50.562441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.562456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.562501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.562507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.562510] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.562524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.562538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.562554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.562596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.562602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.562607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.562621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.562636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.562652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 
01:59:50.562698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.562704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.562707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.562721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.562736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.562751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.562793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.562799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.562803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.562816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.562831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.562863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.562921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.562929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.562933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.562947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.562955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.562962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.562980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.563021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.563027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 
[2024-07-25 01:59:50.563031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.563046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.563062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.563078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.563127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.563133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.563137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.563151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.563166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.563197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.563239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.563245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.563249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.563262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.563277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.563292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.563339] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.563345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.563349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.672 [2024-07-25 01:59:50.563363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563367] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.672 [2024-07-25 01:59:50.563377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.672 [2024-07-25 01:59:50.563393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.672 [2024-07-25 01:59:50.563438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.672 [2024-07-25 01:59:50.563444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.672 [2024-07-25 01:59:50.563448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.672 [2024-07-25 01:59:50.563452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.563462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.563478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.563493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.563544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.563550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.563553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.563567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.563584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.563600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.563641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.563647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.563651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.563664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563668] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.563679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.563694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.563774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.563782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.563786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.563802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.563825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.563865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.563917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.563928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.563935] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.563960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.563971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.563982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.564012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.564071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.564081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.564085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.564106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 
[2024-07-25 01:59:50.564131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.564184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.564229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.564247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.564253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.564275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.564293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.564313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.564364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.564375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.564379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.564395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.564411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.564430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.564472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.564488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.564496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.564518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.564537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.564564] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.564618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.673 [2024-07-25 01:59:50.564635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.673 [2024-07-25 01:59:50.564643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.673 [2024-07-25 01:59:50.564665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.673 [2024-07-25 01:59:50.564674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.673 [2024-07-25 01:59:50.564682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.673 [2024-07-25 01:59:50.564707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.673 [2024-07-25 01:59:50.564752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.674 [2024-07-25 01:59:50.564759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.674 [2024-07-25 01:59:50.564763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.674 [2024-07-25 01:59:50.564769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.674 [2024-07-25 01:59:50.564784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.674 [2024-07-25 01:59:50.564792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.674 [2024-07-25 01:59:50.564798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.674 [2024-07-25 01:59:50.564809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.674 [2024-07-25 01:59:50.564829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.674 [2024-07-25 01:59:50.568935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.674 [2024-07-25 01:59:50.568953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.674 [2024-07-25 01:59:50.568958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.674 [2024-07-25 01:59:50.568963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.674 [2024-07-25 01:59:50.568985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:35.674 [2024-07-25 01:59:50.568990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:35.674 [2024-07-25 01:59:50.568994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24ada90) 00:16:35.674 [2024-07-25 01:59:50.569003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.674 [2024-07-25 01:59:50.569030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f4d40, cid 3, qid 0 00:16:35.674 [2024-07-25 01:59:50.569096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:35.674 
[2024-07-25 01:59:50.569108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:35.674 [2024-07-25 01:59:50.569115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:35.674 [2024-07-25 01:59:50.569121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f4d40) on tqpair=0x24ada90 00:16:35.674 [2024-07-25 01:59:50.569131] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:16:35.674 0% 00:16:35.674 Data Units Read: 0 00:16:35.674 Data Units Written: 0 00:16:35.674 Host Read Commands: 0 00:16:35.674 Host Write Commands: 0 00:16:35.674 Controller Busy Time: 0 minutes 00:16:35.674 Power Cycles: 0 00:16:35.674 Power On Hours: 0 hours 00:16:35.674 Unsafe Shutdowns: 0 00:16:35.674 Unrecoverable Media Errors: 0 00:16:35.674 Lifetime Error Log Entries: 0 00:16:35.674 Warning Temperature Time: 0 minutes 00:16:35.674 Critical Temperature Time: 0 minutes 00:16:35.674 00:16:35.674 Number of Queues 00:16:35.674 ================ 00:16:35.674 Number of I/O Submission Queues: 127 00:16:35.674 Number of I/O Completion Queues: 127 00:16:35.674 00:16:35.674 Active Namespaces 00:16:35.674 ================= 00:16:35.674 Namespace ID:1 00:16:35.674 Error Recovery Timeout: Unlimited 00:16:35.674 Command Set Identifier: NVM (00h) 00:16:35.674 Deallocate: Supported 00:16:35.674 Deallocated/Unwritten Error: Not Supported 00:16:35.674 Deallocated Read Value: Unknown 00:16:35.674 Deallocate in Write Zeroes: Not Supported 00:16:35.674 Deallocated Guard Field: 0xFFFF 00:16:35.674 Flush: Supported 00:16:35.674 Reservation: Supported 00:16:35.674 Namespace Sharing Capabilities: Multiple Controllers 00:16:35.674 Size (in LBAs): 131072 (0GiB) 00:16:35.674 Capacity (in LBAs): 131072 (0GiB) 00:16:35.674 Utilization (in LBAs): 131072 (0GiB) 00:16:35.674 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:35.674 EUI64: ABCDEF0123456789 00:16:35.674 UUID: 2dd4bcb3-bc63-428b-9b08-cbc860ce26d8 00:16:35.674 Thin Provisioning: Not Supported 00:16:35.674 Per-NS Atomic Units: Yes 00:16:35.674 Atomic Boundary Size (Normal): 0 00:16:35.674 Atomic Boundary Size (PFail): 0 00:16:35.674 Atomic Boundary Offset: 0 00:16:35.674 Maximum Single Source Range Length: 65535 00:16:35.674 Maximum Copy Length: 65535 00:16:35.674 Maximum Source Range Count: 1 00:16:35.674 NGUID/EUI64 Never Reused: No 00:16:35.674 Namespace Write Protected: No 00:16:35.674 Number of LBA Formats: 1 00:16:35.674 Current LBA Format: LBA Format #00 00:16:35.674 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:35.674 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:35.674 
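The namespace attributes reported above (NGUID, EUI64, UUID, LBA formats) come from the identify test driven through SPDK's host library; the same identify pages can be read from any Linux initiator with nvme-cli. A minimal sketch, assuming nvme-cli is installed, the target from this log is still listening on 10.0.0.2:4420, and the /dev/nvme0 device names are illustrative rather than taken from this run:

# Discover the subsystem exercised by this test, then connect to it
nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Dump the controller and namespace identify pages (NGUID/EUI64/UUID, LBA formats)
nvme id-ctrl /dev/nvme0
nvme id-ns /dev/nvme0n1
# Tear the connection back down
nvme disconnect -n nqn.2016-06.io.spdk:cnode1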
01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.674 rmmod nvme_tcp 00:16:35.674 rmmod nvme_fabrics 00:16:35.674 rmmod nvme_keyring 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 88823 ']' 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 88823 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 88823 ']' 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 88823 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88823 00:16:35.674 killing process with pid 88823 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88823' 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 88823 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 88823 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:35.674 00:16:35.674 real 0m1.708s 00:16:35.674 user 0m3.959s 00:16:35.674 sys 0m0.552s 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.674 01:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.674 ************************************ 00:16:35.674 END TEST nvmf_identify 00:16:35.674 
************************************ 00:16:35.934 01:59:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:35.934 01:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.934 01:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.934 01:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.934 ************************************ 00:16:35.934 START TEST nvmf_perf 00:16:35.934 ************************************ 00:16:35.934 01:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:35.934 * Looking for test storage... 00:16:35.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f42f786-7175-4746-b686-8365485f4d3d 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=6f42f786-7175-4746-b686-8365485f4d3d 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.934 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:35.935 Cannot find device "nvmf_tgt_br" 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 
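The "Cannot find device" messages above are expected: nvmf_veth_init tears down any topology left over from a previous run before building a new one, and each cleanup command appears to be guarded so a missing interface does not abort the script (the guards show up as bare `# true` entries in the xtrace). A sketch of the same idempotent-cleanup pattern, using the interface names from this log:

# Remove leftover links if present; ignore failures on a clean host
for ifc in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster 2>/dev/null || true
    ip link set "$ifc" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true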
00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.935 Cannot find device "nvmf_tgt_br2" 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:35.935 Cannot find device "nvmf_tgt_br" 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:35.935 Cannot find device "nvmf_tgt_br2" 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:35.935 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.194 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.194 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.194 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.194 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.194 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.194 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.194 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.195 
01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:36.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:16:36.195 00:16:36.195 --- 10.0.0.2 ping statistics --- 00:16:36.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.195 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:36.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:16:36.195 00:16:36.195 --- 10.0.0.3 ping statistics --- 00:16:36.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.195 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:36.195 00:16:36.195 --- 10.0.0.1 ping statistics --- 00:16:36.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.195 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=89017 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 89017 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 89017 ']' 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.195 01:59:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:36.454 [2024-07-25 01:59:51.525954] Starting SPDK v24.09-pre git sha1 d005e023b / DPDK 24.07.0-rc3 initialization... 00:16:36.454 [2024-07-25 01:59:51.526080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.454 [2024-07-25 01:59:51.649658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
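At this point the virtual test network is up: the target runs inside the nvmf_tgt_ns_spdk namespace, two veth pairs carry its 10.0.0.2 and 10.0.0.3 addresses, the initiator side holds 10.0.0.1, and the three pings above confirm connectivity across the nvmf_br bridge. A condensed sketch of the topology the trace just built, using the same names and addresses (link-up steps omitted for brevity; they appear verbatim in the trace):

# One namespace for the target, three veth pairs bridged together
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator: 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target:    10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target:    10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic in to the initiator-facing interface
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT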
00:16:36.454 [2024-07-25 01:59:51.669406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.454 [2024-07-25 01:59:51.713713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.454 [2024-07-25 01:59:51.713797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.454 [2024-07-25 01:59:51.713822] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.454 [2024-07-25 01:59:51.713832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.454 [2024-07-25 01:59:51.713874] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.454 [2024-07-25 01:59:51.713957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.454 [2024-07-25 01:59:51.714685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.454 [2024-07-25 01:59:51.715051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.454 [2024-07-25 01:59:51.715106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.454 [2024-07-25 01:59:51.751617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:37.390 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.390 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:16:37.390 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.390 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:37.390 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:37.390 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.390 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:37.390 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:37.649 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:37.649 01:59:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:37.908 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:37.908 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:38.170 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:38.170 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:38.170 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:38.170 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:38.170 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:38.431 [2024-07-25 01:59:53.631129] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.431 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:38.690 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:38.690 01:59:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.948 01:59:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:38.948 01:59:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:39.208 01:59:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.468 [2024-07-25 01:59:54.548567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:39.727 01:59:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:39.727 01:59:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:39.727 01:59:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:39.727 01:59:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:40.663 Initializing NVMe Controllers 00:16:40.663 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:40.663 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:40.663 Initialization complete. Launching workers. 00:16:40.663 ======================================================== 00:16:40.663 Latency(us) 00:16:40.663 Device Information : IOPS MiB/s Average min max 00:16:40.663 PCIE (0000:00:10.0) NSID 1 from core 0: 23999.00 93.75 1333.05 349.78 7960.64 00:16:40.663 ======================================================== 00:16:40.663 Total : 23999.00 93.75 1333.05 349.78 7960.64 00:16:40.663 00:16:40.663 01:59:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:21.537 Cancelling nested steps due to timeout 00:32:21.540 Sending interrupt signal to process 00:32:32.692 script returned exit code 255 00:32:32.697 [Pipeline] } 00:32:32.720 [Pipeline] // timeout 00:32:32.729 [Pipeline] } 00:32:32.750 [Pipeline] // stage 00:32:32.758 [Pipeline] } 00:32:32.763 Timeout has been exceeded 00:32:32.764 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 77b12fc8-9c43-4330-8962-1714871186bf 00:32:32.764 Setting overall build result to ABORTED 00:32:32.783 [Pipeline] // catchError 00:32:32.793 [Pipeline] stage 00:32:32.796 [Pipeline] { (Stop VM) 00:32:32.810 [Pipeline] sh 00:32:33.091 + vagrant halt 00:32:37.277 ==> default: Halting domain... 00:32:42.552 [Pipeline] sh 00:32:42.831 + vagrant destroy -f 00:32:46.116 ==> default: Removing domain... 
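For reference, the target-side configuration that preceded the abort reduces to a short RPC sequence, and the host-side load generator was spdk_nvme_perf; the job timed out during the perf invocation against the TCP listener. A minimal sketch for reproducing the same setup by hand, assuming a running nvmf_tgt and rpc.py on PATH; the commands mirror the trace above (the trace also attaches the local Nvme0n1 bdev as a second namespace and adds a discovery listener):

# Target side: transport, subsystem, one Malloc namespace, one TCP listener
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py bdev_malloc_create 64 512          # creates Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: the benchmark that was running when the timeout hit
spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'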
00:32:46.128 [Pipeline] sh 00:32:46.407 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:32:46.415 [Pipeline] } 00:32:46.431 [Pipeline] // stage 00:32:46.437 [Pipeline] } 00:32:46.452 [Pipeline] // dir 00:32:46.457 [Pipeline] } 00:32:46.473 [Pipeline] // wrap 00:32:46.478 [Pipeline] } 00:32:46.490 [Pipeline] // catchError 00:32:46.497 [Pipeline] stage 00:32:46.499 [Pipeline] { (Epilogue) 00:32:46.511 [Pipeline] sh 00:32:46.792 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:48.706 [Pipeline] catchError 00:32:48.708 [Pipeline] { 00:32:48.722 [Pipeline] sh 00:32:49.013 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:49.284 Artifacts sizes are good 00:32:49.293 [Pipeline] } 00:32:49.310 [Pipeline] // catchError 00:32:49.320 [Pipeline] archiveArtifacts 00:32:49.326 Archiving artifacts 00:32:49.513 [Pipeline] cleanWs 00:32:49.524 [WS-CLEANUP] Deleting project workspace... 00:32:49.524 [WS-CLEANUP] Deferred wipeout is used... 00:32:49.529 [WS-CLEANUP] done 00:32:49.531 [Pipeline] } 00:32:49.547 [Pipeline] // stage 00:32:49.552 [Pipeline] } 00:32:49.568 [Pipeline] // node 00:32:49.573 [Pipeline] End of Pipeline 00:32:49.609 Finished: ABORTED